Oct 2 19:14:45.180956 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Oct 2 19:14:45.180992 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Oct 2 17:55:37 -00 2023 Oct 2 19:14:45.181015 kernel: efi: EFI v2.70 by EDK II Oct 2 19:14:45.181030 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x71accf98 Oct 2 19:14:45.181044 kernel: ACPI: Early table checksum verification disabled Oct 2 19:14:45.181057 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Oct 2 19:14:45.181073 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Oct 2 19:14:45.181087 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Oct 2 19:14:45.181101 kernel: ACPI: DSDT 0x0000000078640000 00154F (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Oct 2 19:14:45.181115 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Oct 2 19:14:45.181166 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Oct 2 19:14:45.181182 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Oct 2 19:14:45.181196 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Oct 2 19:14:45.181210 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Oct 2 19:14:45.181227 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Oct 2 19:14:45.181246 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Oct 2 19:14:45.181261 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Oct 2 19:14:45.181368 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Oct 2 19:14:45.181605 kernel: printk: bootconsole [uart0] enabled Oct 2 19:14:45.181624 kernel: NUMA: Failed to initialise from firmware Oct 2 19:14:45.181639 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Oct 2 19:14:45.181654 kernel: NUMA: NODE_DATA [mem 0x4b5841900-0x4b5846fff] Oct 2 19:14:45.181668 kernel: Zone ranges: Oct 2 19:14:45.181683 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Oct 2 19:14:45.181697 kernel: DMA32 empty Oct 2 19:14:45.181711 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Oct 2 19:14:45.181731 kernel: Movable zone start for each node Oct 2 19:14:45.181745 kernel: Early memory node ranges Oct 2 19:14:45.181759 kernel: node 0: [mem 0x0000000040000000-0x00000000786effff] Oct 2 19:14:45.181774 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Oct 2 19:14:45.181788 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Oct 2 19:14:45.181802 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Oct 2 19:14:45.181816 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Oct 2 19:14:45.181830 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Oct 2 19:14:45.181845 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Oct 2 19:14:45.181859 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Oct 2 19:14:45.181873 kernel: psci: probing for conduit method from ACPI. Oct 2 19:14:45.181887 kernel: psci: PSCIv1.0 detected in firmware. 
Oct 2 19:14:45.181905 kernel: psci: Using standard PSCI v0.2 function IDs Oct 2 19:14:45.181920 kernel: psci: Trusted OS migration not required Oct 2 19:14:45.181941 kernel: psci: SMC Calling Convention v1.1 Oct 2 19:14:45.181956 kernel: ACPI: SRAT not present Oct 2 19:14:45.181972 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784 Oct 2 19:14:45.181991 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096 Oct 2 19:14:45.182006 kernel: pcpu-alloc: [0] 0 [0] 1 Oct 2 19:14:45.182021 kernel: Detected PIPT I-cache on CPU0 Oct 2 19:14:45.182036 kernel: CPU features: detected: GIC system register CPU interface Oct 2 19:14:45.182051 kernel: CPU features: detected: Spectre-v2 Oct 2 19:14:45.182066 kernel: CPU features: detected: Spectre-v3a Oct 2 19:14:45.182081 kernel: CPU features: detected: Spectre-BHB Oct 2 19:14:45.182096 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 2 19:14:45.182111 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 2 19:14:45.182240 kernel: CPU features: detected: ARM erratum 1742098 Oct 2 19:14:45.182259 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Oct 2 19:14:45.182279 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Oct 2 19:14:45.182295 kernel: Policy zone: Normal Oct 2 19:14:45.182313 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca Oct 2 19:14:45.182329 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 2 19:14:45.182344 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 2 19:14:45.182359 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 19:14:45.182374 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 19:14:45.182390 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Oct 2 19:14:45.182405 kernel: Memory: 3826444K/4030464K available (9792K kernel code, 2092K rwdata, 7548K rodata, 34560K init, 779K bss, 204020K reserved, 0K cma-reserved) Oct 2 19:14:45.182421 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 2 19:14:45.182439 kernel: trace event string verifier disabled Oct 2 19:14:45.182454 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 2 19:14:45.182470 kernel: rcu: RCU event tracing is enabled. Oct 2 19:14:45.182486 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 2 19:14:45.182501 kernel: Trampoline variant of Tasks RCU enabled. Oct 2 19:14:45.182516 kernel: Tracing variant of Tasks RCU enabled. Oct 2 19:14:45.182532 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Oct 2 19:14:45.182547 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 2 19:14:45.182561 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 2 19:14:45.182576 kernel: GICv3: 96 SPIs implemented Oct 2 19:14:45.182591 kernel: GICv3: 0 Extended SPIs implemented Oct 2 19:14:45.182689 kernel: GICv3: Distributor has no Range Selector support Oct 2 19:14:45.182714 kernel: Root IRQ handler: gic_handle_irq Oct 2 19:14:45.182780 kernel: GICv3: 16 PPIs implemented Oct 2 19:14:45.182798 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Oct 2 19:14:45.182813 kernel: ACPI: SRAT not present Oct 2 19:14:45.182828 kernel: ITS [mem 0x10080000-0x1009ffff] Oct 2 19:14:45.182843 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000a0000 (indirect, esz 8, psz 64K, shr 1) Oct 2 19:14:45.182859 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000b0000 (flat, esz 8, psz 64K, shr 1) Oct 2 19:14:45.182874 kernel: GICv3: using LPI property table @0x00000004000c0000 Oct 2 19:14:45.182889 kernel: ITS: Using hypervisor restricted LPI range [128] Oct 2 19:14:45.182904 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000 Oct 2 19:14:45.182919 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Oct 2 19:14:45.182939 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Oct 2 19:14:45.182955 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Oct 2 19:14:45.182970 kernel: Console: colour dummy device 80x25 Oct 2 19:14:45.182986 kernel: printk: console [tty1] enabled Oct 2 19:14:45.183001 kernel: ACPI: Core revision 20210730 Oct 2 19:14:45.183017 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Oct 2 19:14:45.183033 kernel: pid_max: default: 32768 minimum: 301 Oct 2 19:14:45.183048 kernel: LSM: Security Framework initializing Oct 2 19:14:45.183064 kernel: SELinux: Initializing. Oct 2 19:14:45.183079 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:14:45.183099 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:14:45.183115 kernel: rcu: Hierarchical SRCU implementation. Oct 2 19:14:45.183179 kernel: Platform MSI: ITS@0x10080000 domain created Oct 2 19:14:45.183196 kernel: PCI/MSI: ITS@0x10080000 domain created Oct 2 19:14:45.183211 kernel: Remapping and enabling EFI services. Oct 2 19:14:45.183227 kernel: smp: Bringing up secondary CPUs ... Oct 2 19:14:45.183243 kernel: Detected PIPT I-cache on CPU1 Oct 2 19:14:45.183258 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Oct 2 19:14:45.183274 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000 Oct 2 19:14:45.183296 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Oct 2 19:14:45.183311 kernel: smp: Brought up 1 node, 2 CPUs Oct 2 19:14:45.190842 kernel: SMP: Total of 2 processors activated. 
Oct 2 19:14:45.190868 kernel: CPU features: detected: 32-bit EL0 Support Oct 2 19:14:45.190884 kernel: CPU features: detected: 32-bit EL1 Support Oct 2 19:14:45.190900 kernel: CPU features: detected: CRC32 instructions Oct 2 19:14:45.190915 kernel: CPU: All CPU(s) started at EL1 Oct 2 19:14:45.190930 kernel: alternatives: patching kernel code Oct 2 19:14:45.190946 kernel: devtmpfs: initialized Oct 2 19:14:45.190968 kernel: KASLR disabled due to lack of seed Oct 2 19:14:45.190984 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 19:14:45.191001 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 2 19:14:45.191026 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 19:14:45.191046 kernel: SMBIOS 3.0.0 present. Oct 2 19:14:45.191062 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Oct 2 19:14:45.191078 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 19:14:45.191094 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 2 19:14:45.191110 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 2 19:14:45.191145 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 2 19:14:45.191164 kernel: audit: initializing netlink subsys (disabled) Oct 2 19:14:45.191181 kernel: audit: type=2000 audit(0.248:1): state=initialized audit_enabled=0 res=1 Oct 2 19:14:45.191202 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 19:14:45.191218 kernel: cpuidle: using governor menu Oct 2 19:14:45.191234 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Oct 2 19:14:45.191251 kernel: ASID allocator initialised with 32768 entries Oct 2 19:14:45.191267 kernel: ACPI: bus type PCI registered Oct 2 19:14:45.191287 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 19:14:45.191303 kernel: Serial: AMBA PL011 UART driver Oct 2 19:14:45.191319 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 2 19:14:45.191335 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Oct 2 19:14:45.191351 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 19:14:45.191367 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Oct 2 19:14:45.191383 kernel: cryptd: max_cpu_qlen set to 1000 Oct 2 19:14:45.191399 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 2 19:14:45.191415 kernel: ACPI: Added _OSI(Module Device) Oct 2 19:14:45.191436 kernel: ACPI: Added _OSI(Processor Device) Oct 2 19:14:45.191452 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 19:14:45.191468 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 19:14:45.191484 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 2 19:14:45.191500 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 19:14:45.191516 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 19:14:45.191532 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 2 19:14:45.191548 kernel: ACPI: Interpreter enabled Oct 2 19:14:45.191564 kernel: ACPI: Using GIC for interrupt routing Oct 2 19:14:45.191583 kernel: ACPI: MCFG table detected, 1 entries Oct 2 19:14:45.191600 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Oct 2 19:14:45.191982 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 2 19:14:45.192205 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 2 19:14:45.192400 kernel: 
acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 2 19:14:45.192589 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Oct 2 19:14:45.192778 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Oct 2 19:14:45.192806 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Oct 2 19:14:45.192823 kernel: acpiphp: Slot [1] registered Oct 2 19:14:45.192839 kernel: acpiphp: Slot [2] registered Oct 2 19:14:45.192856 kernel: acpiphp: Slot [3] registered Oct 2 19:14:45.192871 kernel: acpiphp: Slot [4] registered Oct 2 19:14:45.192888 kernel: acpiphp: Slot [5] registered Oct 2 19:14:45.192903 kernel: acpiphp: Slot [6] registered Oct 2 19:14:45.192919 kernel: acpiphp: Slot [7] registered Oct 2 19:14:45.192935 kernel: acpiphp: Slot [8] registered Oct 2 19:14:45.192955 kernel: acpiphp: Slot [9] registered Oct 2 19:14:45.192971 kernel: acpiphp: Slot [10] registered Oct 2 19:14:45.192988 kernel: acpiphp: Slot [11] registered Oct 2 19:14:45.193007 kernel: acpiphp: Slot [12] registered Oct 2 19:14:45.193023 kernel: acpiphp: Slot [13] registered Oct 2 19:14:45.193038 kernel: acpiphp: Slot [14] registered Oct 2 19:14:45.193054 kernel: acpiphp: Slot [15] registered Oct 2 19:14:45.193070 kernel: acpiphp: Slot [16] registered Oct 2 19:14:45.193086 kernel: acpiphp: Slot [17] registered Oct 2 19:14:45.193102 kernel: acpiphp: Slot [18] registered Oct 2 19:14:45.193141 kernel: acpiphp: Slot [19] registered Oct 2 19:14:45.193160 kernel: acpiphp: Slot [20] registered Oct 2 19:14:45.193177 kernel: acpiphp: Slot [21] registered Oct 2 19:14:45.193193 kernel: acpiphp: Slot [22] registered Oct 2 19:14:45.193209 kernel: acpiphp: Slot [23] registered Oct 2 19:14:45.193225 kernel: acpiphp: Slot [24] registered Oct 2 19:14:45.193240 kernel: acpiphp: Slot [25] registered Oct 2 19:14:45.193256 kernel: acpiphp: Slot [26] registered Oct 2 19:14:45.193272 kernel: acpiphp: Slot [27] registered Oct 2 19:14:45.193293 kernel: acpiphp: Slot [28] registered Oct 2 19:14:45.193309 kernel: acpiphp: Slot [29] registered Oct 2 19:14:45.193325 kernel: acpiphp: Slot [30] registered Oct 2 19:14:45.193341 kernel: acpiphp: Slot [31] registered Oct 2 19:14:45.193357 kernel: PCI host bridge to bus 0000:00 Oct 2 19:14:45.193558 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Oct 2 19:14:45.193736 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Oct 2 19:14:45.193909 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Oct 2 19:14:45.194084 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Oct 2 19:14:45.194340 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Oct 2 19:14:45.194568 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Oct 2 19:14:45.194791 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Oct 2 19:14:45.195000 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Oct 2 19:14:45.195234 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Oct 2 19:14:45.195458 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Oct 2 19:14:45.195755 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Oct 2 19:14:45.195994 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Oct 2 19:14:45.196252 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Oct 2 19:14:45.196459 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Oct 2 19:14:45.196656 kernel: 
pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Oct 2 19:14:45.196852 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Oct 2 19:14:45.197059 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Oct 2 19:14:45.197352 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Oct 2 19:14:45.197550 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Oct 2 19:14:45.197748 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Oct 2 19:14:45.197928 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Oct 2 19:14:45.198102 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 2 19:14:45.198351 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Oct 2 19:14:45.198382 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 2 19:14:45.198399 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 2 19:14:45.198416 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 2 19:14:45.198433 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 2 19:14:45.198449 kernel: iommu: Default domain type: Translated Oct 2 19:14:45.198465 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 2 19:14:45.198481 kernel: vgaarb: loaded Oct 2 19:14:45.198497 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 19:14:45.198513 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 2 19:14:45.198533 kernel: PTP clock support registered Oct 2 19:14:45.198550 kernel: Registered efivars operations Oct 2 19:14:45.198566 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 2 19:14:45.198582 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 19:14:45.198598 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 19:14:45.198633 kernel: pnp: PnP ACPI init Oct 2 19:14:45.198846 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Oct 2 19:14:45.198871 kernel: pnp: PnP ACPI: found 1 devices Oct 2 19:14:45.198887 kernel: NET: Registered PF_INET protocol family Oct 2 19:14:45.198909 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 2 19:14:45.198926 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 2 19:14:45.198942 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 19:14:45.198958 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 2 19:14:45.198975 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Oct 2 19:14:45.198991 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 2 19:14:45.199007 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:14:45.199024 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:14:45.199040 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 19:14:45.199061 kernel: PCI: CLS 0 bytes, default 64 Oct 2 19:14:45.199077 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Oct 2 19:14:45.199093 kernel: kvm [1]: HYP mode not available Oct 2 19:14:45.199109 kernel: Initialise system trusted keyrings Oct 2 19:14:45.199148 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 2 19:14:45.199168 kernel: Key type asymmetric registered Oct 2 19:14:45.199184 kernel: Asymmetric key parser 'x509' registered Oct 2 19:14:45.199201 kernel: Block 
layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 19:14:45.199217 kernel: io scheduler mq-deadline registered Oct 2 19:14:45.199239 kernel: io scheduler kyber registered Oct 2 19:14:45.199255 kernel: io scheduler bfq registered Oct 2 19:14:45.199463 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Oct 2 19:14:45.199488 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 2 19:14:45.199505 kernel: ACPI: button: Power Button [PWRB] Oct 2 19:14:45.199521 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 19:14:45.199538 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Oct 2 19:14:45.199728 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Oct 2 19:14:45.199755 kernel: printk: console [ttyS0] disabled Oct 2 19:14:45.199773 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Oct 2 19:14:45.199789 kernel: printk: console [ttyS0] enabled Oct 2 19:14:45.199806 kernel: printk: bootconsole [uart0] disabled Oct 2 19:14:45.199822 kernel: thunder_xcv, ver 1.0 Oct 2 19:14:45.199838 kernel: thunder_bgx, ver 1.0 Oct 2 19:14:45.199854 kernel: nicpf, ver 1.0 Oct 2 19:14:45.199870 kernel: nicvf, ver 1.0 Oct 2 19:14:45.200067 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 2 19:14:45.200295 kernel: rtc-efi rtc-efi.0: setting system clock to 2023-10-02T19:14:44 UTC (1696274084) Oct 2 19:14:45.200320 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 2 19:14:45.200337 kernel: NET: Registered PF_INET6 protocol family Oct 2 19:14:45.200353 kernel: Segment Routing with IPv6 Oct 2 19:14:45.200369 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 19:14:45.200385 kernel: NET: Registered PF_PACKET protocol family Oct 2 19:14:45.200401 kernel: Key type dns_resolver registered Oct 2 19:14:45.200417 kernel: registered taskstats version 1 Oct 2 19:14:45.200439 kernel: Loading compiled-in X.509 certificates Oct 2 19:14:45.200455 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 3a2a38edc68cb70dc60ec0223a6460557b3bb28d' Oct 2 19:14:45.200472 kernel: Key type .fscrypt registered Oct 2 19:14:45.200488 kernel: Key type fscrypt-provisioning registered Oct 2 19:14:45.200503 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 2 19:14:45.200520 kernel: ima: Allocated hash algorithm: sha1 Oct 2 19:14:45.200536 kernel: ima: No architecture policies found Oct 2 19:14:45.200552 kernel: Freeing unused kernel memory: 34560K Oct 2 19:14:45.200568 kernel: Run /init as init process Oct 2 19:14:45.200587 kernel: with arguments: Oct 2 19:14:45.200604 kernel: /init Oct 2 19:14:45.200620 kernel: with environment: Oct 2 19:14:45.200635 kernel: HOME=/ Oct 2 19:14:45.200651 kernel: TERM=linux Oct 2 19:14:45.200666 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 19:14:45.200687 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:14:45.200708 systemd[1]: Detected virtualization amazon. Oct 2 19:14:45.200730 systemd[1]: Detected architecture arm64. Oct 2 19:14:45.200747 systemd[1]: Running in initrd. Oct 2 19:14:45.200765 systemd[1]: No hostname configured, using default hostname. Oct 2 19:14:45.200782 systemd[1]: Hostname set to . 
Oct 2 19:14:45.200800 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:14:45.200818 systemd[1]: Queued start job for default target initrd.target. Oct 2 19:14:45.200836 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:14:45.200853 systemd[1]: Reached target cryptsetup.target. Oct 2 19:14:45.200874 systemd[1]: Reached target paths.target. Oct 2 19:14:45.200892 systemd[1]: Reached target slices.target. Oct 2 19:14:45.200909 systemd[1]: Reached target swap.target. Oct 2 19:14:45.200926 systemd[1]: Reached target timers.target. Oct 2 19:14:45.200945 systemd[1]: Listening on iscsid.socket. Oct 2 19:14:45.200962 systemd[1]: Listening on iscsiuio.socket. Oct 2 19:14:45.200980 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 19:14:45.200998 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 19:14:45.201019 systemd[1]: Listening on systemd-journald.socket. Oct 2 19:14:45.201037 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:14:45.201054 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:14:45.201072 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:14:45.201089 systemd[1]: Reached target sockets.target. Oct 2 19:14:45.201107 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:14:45.201153 systemd[1]: Finished network-cleanup.service. Oct 2 19:14:45.201174 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 19:14:45.201192 systemd[1]: Starting systemd-journald.service... Oct 2 19:14:45.201216 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:14:45.201233 systemd[1]: Starting systemd-resolved.service... Oct 2 19:14:45.201251 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 19:14:45.201269 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:14:45.201287 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 19:14:45.201305 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:14:45.201322 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 19:14:45.201339 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:14:45.201357 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 19:14:45.201378 kernel: Bridge firewalling registered Oct 2 19:14:45.201396 kernel: audit: type=1130 audit(1696274085.182:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:45.201413 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 19:14:45.201435 systemd-journald[309]: Journal started Oct 2 19:14:45.201526 systemd-journald[309]: Runtime Journal (/run/log/journal/ec2be352b7891ffed8e52cb637b3945d) is 8.0M, max 75.4M, 67.4M free. Oct 2 19:14:45.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:45.125404 systemd-modules-load[310]: Inserted module 'overlay' Oct 2 19:14:45.193043 systemd-modules-load[310]: Inserted module 'br_netfilter' Oct 2 19:14:45.226159 systemd[1]: Started systemd-journald.service. Oct 2 19:14:45.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:14:45.240019 kernel: audit: type=1130 audit(1696274085.227:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:45.240081 kernel: SCSI subsystem initialized Oct 2 19:14:45.261157 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 19:14:45.261224 kernel: device-mapper: uevent: version 1.0.3 Oct 2 19:14:45.264952 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 19:14:45.266357 systemd-resolved[311]: Positive Trust Anchors: Oct 2 19:14:45.266383 systemd-resolved[311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:14:45.266436 systemd-resolved[311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:14:45.297804 systemd-modules-load[310]: Inserted module 'dm_multipath' Oct 2 19:14:45.301828 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:14:45.338294 kernel: audit: type=1130 audit(1696274085.302:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:45.338330 kernel: audit: type=1130 audit(1696274085.312:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:45.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:45.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:45.304031 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 19:14:45.314771 systemd[1]: Starting dracut-cmdline.service... Oct 2 19:14:45.327422 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:14:45.375910 dracut-cmdline[329]: dracut-dracut-053 Oct 2 19:14:45.382715 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:14:45.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:45.394190 kernel: audit: type=1130 audit(1696274085.383:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:14:45.400903 dracut-cmdline[329]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca Oct 2 19:14:45.623142 kernel: Loading iSCSI transport class v2.0-870. Oct 2 19:14:45.634146 kernel: iscsi: registered transport (tcp) Oct 2 19:14:45.660985 kernel: iscsi: registered transport (qla4xxx) Oct 2 19:14:45.661055 kernel: QLogic iSCSI HBA Driver Oct 2 19:14:45.872075 systemd-resolved[311]: Defaulting to hostname 'linux'. Oct 2 19:14:45.874025 kernel: random: crng init done Oct 2 19:14:45.875873 systemd[1]: Started systemd-resolved.service. Oct 2 19:14:45.894548 kernel: audit: type=1130 audit(1696274085.878:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:45.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:45.879899 systemd[1]: Reached target nss-lookup.target. Oct 2 19:14:45.940520 systemd[1]: Finished dracut-cmdline.service. Oct 2 19:14:45.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:45.949617 systemd[1]: Starting dracut-pre-udev.service... Oct 2 19:14:45.963157 kernel: audit: type=1130 audit(1696274085.939:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.041157 kernel: raid6: neonx8 gen() 6333 MB/s Oct 2 19:14:46.059148 kernel: raid6: neonx8 xor() 4632 MB/s Oct 2 19:14:46.077149 kernel: raid6: neonx4 gen() 6563 MB/s Oct 2 19:14:46.095149 kernel: raid6: neonx4 xor() 4865 MB/s Oct 2 19:14:46.113156 kernel: raid6: neonx2 gen() 5789 MB/s Oct 2 19:14:46.131152 kernel: raid6: neonx2 xor() 4432 MB/s Oct 2 19:14:46.149153 kernel: raid6: neonx1 gen() 4456 MB/s Oct 2 19:14:46.167152 kernel: raid6: neonx1 xor() 3630 MB/s Oct 2 19:14:46.185151 kernel: raid6: int64x8 gen() 3418 MB/s Oct 2 19:14:46.203152 kernel: raid6: int64x8 xor() 2082 MB/s Oct 2 19:14:46.221152 kernel: raid6: int64x4 gen() 3852 MB/s Oct 2 19:14:46.239150 kernel: raid6: int64x4 xor() 2187 MB/s Oct 2 19:14:46.257152 kernel: raid6: int64x2 gen() 3622 MB/s Oct 2 19:14:46.275152 kernel: raid6: int64x2 xor() 1949 MB/s Oct 2 19:14:46.293151 kernel: raid6: int64x1 gen() 2777 MB/s Oct 2 19:14:46.312815 kernel: raid6: int64x1 xor() 1450 MB/s Oct 2 19:14:46.312845 kernel: raid6: using algorithm neonx4 gen() 6563 MB/s Oct 2 19:14:46.312876 kernel: raid6: .... 
xor() 4865 MB/s, rmw enabled Oct 2 19:14:46.314697 kernel: raid6: using neon recovery algorithm Oct 2 19:14:46.334166 kernel: xor: measuring software checksum speed Oct 2 19:14:46.336151 kernel: 8regs : 9333 MB/sec Oct 2 19:14:46.339151 kernel: 32regs : 11107 MB/sec Oct 2 19:14:46.343336 kernel: arm64_neon : 9566 MB/sec Oct 2 19:14:46.343372 kernel: xor: using function: 32regs (11107 MB/sec) Oct 2 19:14:46.433171 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Oct 2 19:14:46.472411 systemd[1]: Finished dracut-pre-udev.service. Oct 2 19:14:46.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.476740 systemd[1]: Starting systemd-udevd.service... Oct 2 19:14:46.487251 kernel: audit: type=1130 audit(1696274086.473:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.487571 kernel: audit: type=1334 audit(1696274086.474:10): prog-id=7 op=LOAD Oct 2 19:14:46.474000 audit: BPF prog-id=7 op=LOAD Oct 2 19:14:46.474000 audit: BPF prog-id=8 op=LOAD Oct 2 19:14:46.520264 systemd-udevd[508]: Using default interface naming scheme 'v252'. Oct 2 19:14:46.531314 systemd[1]: Started systemd-udevd.service. Oct 2 19:14:46.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.538577 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 19:14:46.600963 dracut-pre-trigger[519]: rd.md=0: removing MD RAID activation Oct 2 19:14:46.708546 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 19:14:46.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.712758 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:14:46.830553 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:14:46.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.977217 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 2 19:14:46.977277 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Oct 2 19:14:46.985064 kernel: ena 0000:00:05.0: ENA device version: 0.10 Oct 2 19:14:46.985398 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Oct 2 19:14:46.993152 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:35:0c:0f:32:cb Oct 2 19:14:46.996251 (udev-worker)[562]: Network interface NamePolicy= disabled on kernel command line. Oct 2 19:14:47.010162 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Oct 2 19:14:47.015171 kernel: nvme nvme0: pci function 0000:00:04.0 Oct 2 19:14:47.024156 kernel: nvme nvme0: 2/0/0 default/read/poll queues Oct 2 19:14:47.031496 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 2 19:14:47.031551 kernel: GPT:9289727 != 16777215 Oct 2 19:14:47.031584 kernel: GPT:Alternate GPT header not at the end of the disk. 
Oct 2 19:14:47.033725 kernel: GPT:9289727 != 16777215 Oct 2 19:14:47.035050 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 2 19:14:47.038652 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:14:47.121207 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (560) Oct 2 19:14:47.189434 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 19:14:47.269496 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:14:47.343814 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 2 19:14:47.348655 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 19:14:47.363444 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 19:14:47.368659 systemd[1]: Starting disk-uuid.service... Oct 2 19:14:47.390372 disk-uuid[673]: Primary Header is updated. Oct 2 19:14:47.390372 disk-uuid[673]: Secondary Entries is updated. Oct 2 19:14:47.390372 disk-uuid[673]: Secondary Header is updated. Oct 2 19:14:47.411154 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:14:47.419161 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:14:48.425476 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:14:48.425551 disk-uuid[674]: The operation has completed successfully. Oct 2 19:14:48.695081 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 19:14:48.695335 systemd[1]: Finished disk-uuid.service. Oct 2 19:14:48.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:48.707150 kernel: kauditd_printk_skb: 4 callbacks suppressed Oct 2 19:14:48.707198 kernel: audit: type=1130 audit(1696274088.698:15): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:48.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:48.716788 kernel: audit: type=1131 audit(1696274088.699:16): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:48.722434 systemd[1]: Starting verity-setup.service... Oct 2 19:14:48.772202 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 2 19:14:48.862200 systemd[1]: Found device dev-mapper-usr.device. Oct 2 19:14:48.873530 systemd[1]: Mounting sysusr-usr.mount... Oct 2 19:14:48.877813 systemd[1]: Finished verity-setup.service. Oct 2 19:14:48.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:48.889180 kernel: audit: type=1130 audit(1696274088.879:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:48.970242 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 19:14:48.970548 systemd[1]: Mounted sysusr-usr.mount. Oct 2 19:14:48.973643 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. 
Oct 2 19:14:48.977698 systemd[1]: Starting ignition-setup.service... Oct 2 19:14:48.989274 systemd[1]: Starting parse-ip-for-networkd.service... Oct 2 19:14:49.021780 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 2 19:14:49.021854 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 2 19:14:49.024110 kernel: BTRFS info (device nvme0n1p6): has skinny extents Oct 2 19:14:49.031149 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 2 19:14:49.063577 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 19:14:49.120312 systemd[1]: Finished ignition-setup.service. Oct 2 19:14:49.124815 systemd[1]: Starting ignition-fetch-offline.service... Oct 2 19:14:49.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:49.154186 kernel: audit: type=1130 audit(1696274089.122:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:49.347440 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 19:14:49.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:49.357000 audit: BPF prog-id=9 op=LOAD Oct 2 19:14:49.361524 kernel: audit: type=1130 audit(1696274089.349:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:49.361584 kernel: audit: type=1334 audit(1696274089.357:20): prog-id=9 op=LOAD Oct 2 19:14:49.359711 systemd[1]: Starting systemd-networkd.service... Oct 2 19:14:49.420501 systemd-networkd[1186]: lo: Link UP Oct 2 19:14:49.420526 systemd-networkd[1186]: lo: Gained carrier Oct 2 19:14:49.424800 systemd-networkd[1186]: Enumeration completed Oct 2 19:14:49.438500 kernel: audit: type=1130 audit(1696274089.425:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:49.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:49.424995 systemd[1]: Started systemd-networkd.service. Oct 2 19:14:49.426936 systemd[1]: Reached target network.target. Oct 2 19:14:49.429050 systemd-networkd[1186]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:14:49.440502 systemd[1]: Starting iscsiuio.service... Oct 2 19:14:49.444243 systemd-networkd[1186]: eth0: Link UP Oct 2 19:14:49.444253 systemd-networkd[1186]: eth0: Gained carrier Oct 2 19:14:49.463549 systemd[1]: Started iscsiuio.service. Oct 2 19:14:49.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:49.471654 systemd[1]: Starting iscsid.service... 
Oct 2 19:14:49.479326 kernel: audit: type=1130 audit(1696274089.467:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:49.481340 systemd-networkd[1186]: eth0: DHCPv4 address 172.31.24.89/20, gateway 172.31.16.1 acquired from 172.31.16.1 Oct 2 19:14:49.489805 iscsid[1191]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:14:49.489805 iscsid[1191]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Oct 2 19:14:49.489805 iscsid[1191]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 19:14:49.489805 iscsid[1191]: If using hardware iscsi like qla4xxx this message can be ignored. Oct 2 19:14:49.510714 iscsid[1191]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:14:49.510714 iscsid[1191]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 19:14:49.537187 kernel: audit: type=1130 audit(1696274089.511:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:49.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:49.505164 systemd[1]: Started iscsid.service. Oct 2 19:14:49.521990 systemd[1]: Starting dracut-initqueue.service... Oct 2 19:14:49.572421 systemd[1]: Finished dracut-initqueue.service. Oct 2 19:14:49.593389 kernel: audit: type=1130 audit(1696274089.572:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:49.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:49.573541 systemd[1]: Reached target remote-fs-pre.target. Oct 2 19:14:49.573642 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:14:49.583903 systemd[1]: Reached target remote-fs.target. Oct 2 19:14:49.586490 systemd[1]: Starting dracut-pre-mount.service... Oct 2 19:14:49.625035 systemd[1]: Finished dracut-pre-mount.service. Oct 2 19:14:49.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Oct 2 19:14:49.783231 ignition[1106]: Ignition 2.14.0 Oct 2 19:14:49.783261 ignition[1106]: Stage: fetch-offline Oct 2 19:14:49.783679 ignition[1106]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:14:49.785076 ignition[1106]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:14:49.804779 ignition[1106]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:14:49.807823 ignition[1106]: Ignition finished successfully Oct 2 19:14:49.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:49.810394 systemd[1]: Finished ignition-fetch-offline.service. Oct 2 19:14:49.818639 systemd[1]: Starting ignition-fetch.service... Oct 2 19:14:49.850175 ignition[1210]: Ignition 2.14.0 Oct 2 19:14:49.850701 ignition[1210]: Stage: fetch Oct 2 19:14:49.851102 ignition[1210]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:14:49.851195 ignition[1210]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:14:49.868360 ignition[1210]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:14:49.870727 ignition[1210]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:14:49.887694 ignition[1210]: INFO : PUT result: OK Oct 2 19:14:49.890996 ignition[1210]: DEBUG : parsed url from cmdline: "" Oct 2 19:14:49.892942 ignition[1210]: INFO : no config URL provided Oct 2 19:14:49.894676 ignition[1210]: INFO : reading system config file "/usr/lib/ignition/user.ign" Oct 2 19:14:49.894676 ignition[1210]: INFO : no config at "/usr/lib/ignition/user.ign" Oct 2 19:14:49.894676 ignition[1210]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:14:49.901731 ignition[1210]: INFO : PUT result: OK Oct 2 19:14:49.901731 ignition[1210]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Oct 2 19:14:49.905849 ignition[1210]: INFO : GET result: OK Oct 2 19:14:49.907574 ignition[1210]: DEBUG : parsing config with SHA512: 6cdd1c5c29557d45eaa2a9e5eba876ada592d88b7b40975675be4f3ba3ee168e68cb95571c3c56055b64a911fbd829ef10ab0f799405eea5c5ea53c5d09f11d3 Oct 2 19:14:49.933058 unknown[1210]: fetched base config from "system" Oct 2 19:14:49.933091 unknown[1210]: fetched base config from "system" Oct 2 19:14:49.933107 unknown[1210]: fetched user config from "aws" Oct 2 19:14:49.938980 ignition[1210]: fetch: fetch complete Oct 2 19:14:49.939009 ignition[1210]: fetch: fetch passed Oct 2 19:14:49.940419 ignition[1210]: Ignition finished successfully Oct 2 19:14:49.944345 systemd[1]: Finished ignition-fetch.service. Oct 2 19:14:49.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:49.950057 systemd[1]: Starting ignition-kargs.service... 
Oct 2 19:14:49.983486 ignition[1216]: Ignition 2.14.0 Oct 2 19:14:49.985868 ignition[1216]: Stage: kargs Oct 2 19:14:49.987506 ignition[1216]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:14:49.989897 ignition[1216]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:14:50.001544 ignition[1216]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:14:50.003945 ignition[1216]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:14:50.007459 ignition[1216]: INFO : PUT result: OK Oct 2 19:14:50.012769 ignition[1216]: kargs: kargs passed Oct 2 19:14:50.012877 ignition[1216]: Ignition finished successfully Oct 2 19:14:50.017338 systemd[1]: Finished ignition-kargs.service. Oct 2 19:14:50.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:50.021777 systemd[1]: Starting ignition-disks.service... Oct 2 19:14:50.051313 ignition[1222]: Ignition 2.14.0 Oct 2 19:14:50.051342 ignition[1222]: Stage: disks Oct 2 19:14:50.051727 ignition[1222]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:14:50.051873 ignition[1222]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:14:50.067976 ignition[1222]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:14:50.070558 ignition[1222]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:14:50.074101 ignition[1222]: INFO : PUT result: OK Oct 2 19:14:50.078717 ignition[1222]: disks: disks passed Oct 2 19:14:50.079005 ignition[1222]: Ignition finished successfully Oct 2 19:14:50.083498 systemd[1]: Finished ignition-disks.service. Oct 2 19:14:50.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:50.086801 systemd[1]: Reached target initrd-root-device.target. Oct 2 19:14:50.091929 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:14:50.095148 systemd[1]: Reached target local-fs.target. Oct 2 19:14:50.098099 systemd[1]: Reached target sysinit.target. Oct 2 19:14:50.101083 systemd[1]: Reached target basic.target. Oct 2 19:14:50.105547 systemd[1]: Starting systemd-fsck-root.service... Oct 2 19:14:50.162536 systemd-fsck[1230]: ROOT: clean, 603/553520 files, 56011/553472 blocks Oct 2 19:14:50.170377 systemd[1]: Finished systemd-fsck-root.service. Oct 2 19:14:50.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:50.175034 systemd[1]: Mounting sysroot.mount... Oct 2 19:14:50.203171 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 19:14:50.206269 systemd[1]: Mounted sysroot.mount. Oct 2 19:14:50.207968 systemd[1]: Reached target initrd-root-fs.target. Oct 2 19:14:50.220160 systemd[1]: Mounting sysroot-usr.mount... Oct 2 19:14:50.222453 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. 
Oct 2 19:14:50.222548 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 19:14:50.222623 systemd[1]: Reached target ignition-diskful.target. Oct 2 19:14:50.241445 systemd[1]: Mounted sysroot-usr.mount. Oct 2 19:14:50.256083 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:14:50.261237 systemd[1]: Starting initrd-setup-root.service... Oct 2 19:14:50.294147 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1247) Oct 2 19:14:50.299814 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 2 19:14:50.299858 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 2 19:14:50.299883 initrd-setup-root[1252]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 19:14:50.304526 kernel: BTRFS info (device nvme0n1p6): has skinny extents Oct 2 19:14:50.317148 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 2 19:14:50.321920 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:14:50.333658 initrd-setup-root[1278]: cut: /sysroot/etc/group: No such file or directory Oct 2 19:14:50.351378 initrd-setup-root[1286]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 19:14:50.368934 initrd-setup-root[1294]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 19:14:50.591166 systemd[1]: Finished initrd-setup-root.service. Oct 2 19:14:50.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:50.594359 systemd[1]: Starting ignition-mount.service... Oct 2 19:14:50.603437 systemd[1]: Starting sysroot-boot.service... Oct 2 19:14:50.632196 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Oct 2 19:14:50.632360 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Oct 2 19:14:50.662997 systemd[1]: Finished sysroot-boot.service. Oct 2 19:14:50.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:50.676937 ignition[1313]: INFO : Ignition 2.14.0 Oct 2 19:14:50.676937 ignition[1313]: INFO : Stage: mount Oct 2 19:14:50.680307 ignition[1313]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:14:50.680307 ignition[1313]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:14:50.698014 ignition[1313]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:14:50.700633 ignition[1313]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:14:50.704157 ignition[1313]: INFO : PUT result: OK Oct 2 19:14:50.708679 ignition[1313]: INFO : mount: mount passed Oct 2 19:14:50.710362 ignition[1313]: INFO : Ignition finished successfully Oct 2 19:14:50.711863 systemd[1]: Finished ignition-mount.service. Oct 2 19:14:50.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:50.720603 systemd[1]: Starting ignition-files.service... Oct 2 19:14:50.744152 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Oct 2 19:14:50.768184 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1322) Oct 2 19:14:50.773924 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 2 19:14:50.773975 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 2 19:14:50.776213 kernel: BTRFS info (device nvme0n1p6): has skinny extents Oct 2 19:14:50.783138 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 2 19:14:50.788556 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:14:50.821659 ignition[1341]: INFO : Ignition 2.14.0 Oct 2 19:14:50.821659 ignition[1341]: INFO : Stage: files Oct 2 19:14:50.825075 ignition[1341]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:14:50.825075 ignition[1341]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:14:50.843573 ignition[1341]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:14:50.846153 ignition[1341]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:14:50.849812 ignition[1341]: INFO : PUT result: OK Oct 2 19:14:50.854514 ignition[1341]: DEBUG : files: compiled without relabeling support, skipping Oct 2 19:14:50.858716 ignition[1341]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 19:14:50.858716 ignition[1341]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 19:14:50.895157 ignition[1341]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 19:14:50.898113 ignition[1341]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 19:14:50.901895 unknown[1341]: wrote ssh authorized keys file for user: core Oct 2 19:14:50.904323 ignition[1341]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 19:14:50.907857 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Oct 2 19:14:50.911824 ignition[1341]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1 Oct 2 19:14:50.932396 systemd-networkd[1186]: eth0: Gained IPv6LL Oct 2 19:14:51.076870 ignition[1341]: INFO : GET result: OK Oct 2 19:14:51.602861 ignition[1341]: DEBUG : file matches expected sum of: b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a Oct 2 19:14:51.607473 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Oct 2 19:14:51.607473 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Oct 2 19:14:51.607473 ignition[1341]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1 Oct 2 19:14:51.698196 ignition[1341]: INFO : GET result: OK Oct 2 19:14:51.958890 ignition[1341]: DEBUG : file matches expected sum of: db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251 Oct 2 19:14:51.966098 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing 
file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Oct 2 19:14:51.966098 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Oct 2 19:14:51.966098 ignition[1341]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:14:51.984170 ignition[1341]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2183784523" Oct 2 19:14:51.984170 ignition[1341]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2183784523": device or resource busy Oct 2 19:14:51.984170 ignition[1341]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2183784523", trying btrfs: device or resource busy Oct 2 19:14:51.984170 ignition[1341]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2183784523" Oct 2 19:14:52.007254 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1346) Oct 2 19:14:52.007293 ignition[1341]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2183784523" Oct 2 19:14:52.010471 ignition[1341]: INFO : op(3): [started] unmounting "/mnt/oem2183784523" Oct 2 19:14:52.015842 ignition[1341]: INFO : op(3): [finished] unmounting "/mnt/oem2183784523" Oct 2 19:14:52.015842 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Oct 2 19:14:52.015842 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:14:52.015842 ignition[1341]: INFO : GET https://storage.googleapis.com/kubernetes-release/release/v1.28.1/bin/linux/arm64/kubeadm: attempt #1 Oct 2 19:14:52.012749 systemd[1]: mnt-oem2183784523.mount: Deactivated successfully. 
Oct 2 19:14:52.093812 ignition[1341]: INFO : GET result: OK Oct 2 19:14:52.980947 ignition[1341]: DEBUG : file matches expected sum of: 5a08b81f9cc82d3cce21130856ca63b8dafca9149d9775dd25b376eb0f18209aa0e4a47c0a6d7e6fb1316aacd5d59dec770f26c09120c866949d70bc415518b3 Oct 2 19:14:52.986222 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:14:52.986222 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:14:52.986222 ignition[1341]: INFO : GET https://storage.googleapis.com/kubernetes-release/release/v1.28.1/bin/linux/arm64/kubelet: attempt #1 Oct 2 19:14:53.032175 ignition[1341]: INFO : GET result: OK Oct 2 19:14:54.502531 ignition[1341]: DEBUG : file matches expected sum of: 5a898ef543a6482895101ea58e33602e3c0a7682d322aaf08ac3dc8a5a3c8da8f09600d577024549288f8cebb1a86f9c79927796b69a3d8fe989ca8f12b147d6 Oct 2 19:14:54.507734 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:14:54.511245 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh" Oct 2 19:14:54.515196 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 19:14:54.518813 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:14:54.522605 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:14:54.522605 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Oct 2 19:14:54.531452 ignition[1341]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:14:54.556370 ignition[1341]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1826656712" Oct 2 19:14:54.559417 ignition[1341]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1826656712": device or resource busy Oct 2 19:14:54.562842 ignition[1341]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1826656712", trying btrfs: device or resource busy Oct 2 19:14:54.566575 ignition[1341]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1826656712" Oct 2 19:14:54.569711 ignition[1341]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1826656712" Oct 2 19:14:54.590291 ignition[1341]: INFO : op(6): [started] unmounting "/mnt/oem1826656712" Oct 2 19:14:54.592902 systemd[1]: mnt-oem1826656712.mount: Deactivated successfully. 
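Every remote asset in the files stage (the CNI plugins, crictl, kubeadm and kubelet above) is fetched over HTTPS and checked against an expected SHA-512 digest, which the log reports as "file matches expected sum of: …". A short Python sketch of that verify-while-downloading pattern; the arguments in the usage comment are placeholders, the real URLs and sums are the ones logged above:

import hashlib
import urllib.request

def fetch_and_verify(url: str, expected_sha512_hex: str, dest: str) -> None:
    # Hash the stream as it is written out and reject the file on a digest
    # mismatch -- the check the "file matches expected sum" entries record as passing.
    digest = hashlib.sha512()
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        while chunk := resp.read(1 << 20):
            digest.update(chunk)
            out.write(chunk)
    if digest.hexdigest() != expected_sha512_hex:
        raise ValueError(f"sha512 mismatch for {url}")

# usage (hypothetical arguments): fetch_and_verify(kubeadm_url, expected_sum, "/opt/bin/kubeadm")
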
Oct 2 19:14:54.597727 ignition[1341]: INFO : op(6): [finished] unmounting "/mnt/oem1826656712" Oct 2 19:14:54.600102 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Oct 2 19:14:54.600102 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Oct 2 19:14:54.608051 ignition[1341]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:14:54.619482 ignition[1341]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3657493245" Oct 2 19:14:54.622494 ignition[1341]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3657493245": device or resource busy Oct 2 19:14:54.622494 ignition[1341]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3657493245", trying btrfs: device or resource busy Oct 2 19:14:54.622494 ignition[1341]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3657493245" Oct 2 19:14:54.622494 ignition[1341]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3657493245" Oct 2 19:14:54.641557 ignition[1341]: INFO : op(9): [started] unmounting "/mnt/oem3657493245" Oct 2 19:14:54.650042 ignition[1341]: INFO : op(9): [finished] unmounting "/mnt/oem3657493245" Oct 2 19:14:54.652629 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Oct 2 19:14:54.656273 systemd[1]: mnt-oem3657493245.mount: Deactivated successfully. Oct 2 19:14:54.661938 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Oct 2 19:14:54.666342 ignition[1341]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:14:54.681746 ignition[1341]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3169720341" Oct 2 19:14:54.684839 ignition[1341]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3169720341": device or resource busy Oct 2 19:14:54.684839 ignition[1341]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3169720341", trying btrfs: device or resource busy Oct 2 19:14:54.684839 ignition[1341]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3169720341" Oct 2 19:14:54.684839 ignition[1341]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3169720341" Oct 2 19:14:54.698709 ignition[1341]: INFO : op(c): [started] unmounting "/mnt/oem3169720341" Oct 2 19:14:54.701104 ignition[1341]: INFO : op(c): [finished] unmounting "/mnt/oem3169720341" Oct 2 19:14:54.701104 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Oct 2 19:14:54.701104 ignition[1341]: INFO : files: op(d): [started] processing unit "nvidia.service" Oct 2 19:14:54.701104 ignition[1341]: INFO : files: op(d): [finished] processing unit "nvidia.service" Oct 2 19:14:54.701104 ignition[1341]: INFO : files: op(e): [started] processing unit "coreos-metadata-sshkeys@.service" Oct 2 19:14:54.701104 ignition[1341]: INFO : files: op(e): [finished] processing unit "coreos-metadata-sshkeys@.service" Oct 2 19:14:54.701104 ignition[1341]: INFO : files: op(f): [started] processing unit "amazon-ssm-agent.service" Oct 2 19:14:54.721921 ignition[1341]: INFO : files: op(f): 
op(10): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Oct 2 19:14:54.721921 ignition[1341]: INFO : files: op(f): op(10): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Oct 2 19:14:54.721921 ignition[1341]: INFO : files: op(f): [finished] processing unit "amazon-ssm-agent.service" Oct 2 19:14:54.721921 ignition[1341]: INFO : files: op(11): [started] processing unit "prepare-cni-plugins.service" Oct 2 19:14:54.721921 ignition[1341]: INFO : files: op(11): op(12): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:14:54.721921 ignition[1341]: INFO : files: op(11): op(12): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:14:54.721921 ignition[1341]: INFO : files: op(11): [finished] processing unit "prepare-cni-plugins.service" Oct 2 19:14:54.721921 ignition[1341]: INFO : files: op(13): [started] processing unit "prepare-critools.service" Oct 2 19:14:54.721921 ignition[1341]: INFO : files: op(13): op(14): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:14:54.758598 ignition[1341]: INFO : files: op(13): op(14): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:14:54.758598 ignition[1341]: INFO : files: op(13): [finished] processing unit "prepare-critools.service" Oct 2 19:14:54.758598 ignition[1341]: INFO : files: op(15): [started] setting preset to enabled for "nvidia.service" Oct 2 19:14:54.758598 ignition[1341]: INFO : files: op(15): [finished] setting preset to enabled for "nvidia.service" Oct 2 19:14:54.758598 ignition[1341]: INFO : files: op(16): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 2 19:14:54.758598 ignition[1341]: INFO : files: op(16): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 2 19:14:54.758598 ignition[1341]: INFO : files: op(17): [started] setting preset to enabled for "amazon-ssm-agent.service" Oct 2 19:14:54.758598 ignition[1341]: INFO : files: op(17): [finished] setting preset to enabled for "amazon-ssm-agent.service" Oct 2 19:14:54.758598 ignition[1341]: INFO : files: op(18): [started] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:14:54.758598 ignition[1341]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:14:54.758598 ignition[1341]: INFO : files: op(19): [started] setting preset to enabled for "prepare-critools.service" Oct 2 19:14:54.758598 ignition[1341]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 19:14:54.758598 ignition[1341]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:14:54.758598 ignition[1341]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:14:54.758598 ignition[1341]: INFO : files: files passed Oct 2 19:14:54.758598 ignition[1341]: INFO : Ignition finished successfully Oct 2 19:14:54.818180 kernel: kauditd_printk_skb: 9 callbacks suppressed Oct 2 19:14:54.818234 kernel: audit: type=1130 audit(1696274094.800:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:54.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:54.799396 systemd[1]: Finished ignition-files.service. Oct 2 19:14:54.822687 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 19:14:54.833230 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 2 19:14:54.839676 systemd[1]: Starting ignition-quench.service... Oct 2 19:14:54.854110 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 2 19:14:54.856572 systemd[1]: Finished ignition-quench.service. Oct 2 19:14:54.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:54.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:54.875536 kernel: audit: type=1130 audit(1696274094.859:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:54.875592 kernel: audit: type=1131 audit(1696274094.859:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:54.890235 initrd-setup-root-after-ignition[1367]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 19:14:54.895637 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 19:14:54.909918 kernel: audit: type=1130 audit(1696274094.897:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:54.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:54.907856 systemd[1]: Reached target ignition-complete.target. Oct 2 19:14:54.912804 systemd[1]: Starting initrd-parse-etc.service... Oct 2 19:14:54.965344 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 19:14:54.967472 systemd[1]: Finished initrd-parse-etc.service. Oct 2 19:14:54.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:54.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:54.985961 kernel: audit: type=1130 audit(1696274094.968:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:14:54.986032 kernel: audit: type=1131 audit(1696274094.977:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:54.978704 systemd[1]: Reached target initrd-fs.target. Oct 2 19:14:54.987886 systemd[1]: Reached target initrd.target. Oct 2 19:14:54.991167 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 19:14:54.992767 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 19:14:55.038179 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 19:14:55.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.043674 systemd[1]: Starting initrd-cleanup.service... Oct 2 19:14:55.051852 kernel: audit: type=1130 audit(1696274095.040:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.074390 systemd[1]: Stopped target nss-lookup.target. Oct 2 19:14:55.078132 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 19:14:55.082154 systemd[1]: Stopped target timers.target. Oct 2 19:14:55.085615 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 19:14:55.087995 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 19:14:55.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.092051 systemd[1]: Stopped target initrd.target. Oct 2 19:14:55.102287 kernel: audit: type=1131 audit(1696274095.090:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.102534 systemd[1]: Stopped target basic.target. Oct 2 19:14:55.104487 systemd[1]: Stopped target ignition-complete.target. Oct 2 19:14:55.109445 systemd[1]: Stopped target ignition-diskful.target. Oct 2 19:14:55.113504 systemd[1]: Stopped target initrd-root-device.target. Oct 2 19:14:55.117453 systemd[1]: Stopped target remote-fs.target. Oct 2 19:14:55.120971 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 19:14:55.124675 systemd[1]: Stopped target sysinit.target. Oct 2 19:14:55.128145 systemd[1]: Stopped target local-fs.target. Oct 2 19:14:55.131611 systemd[1]: Stopped target local-fs-pre.target. Oct 2 19:14:55.135270 systemd[1]: Stopped target swap.target. Oct 2 19:14:55.138434 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 19:14:55.140799 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 19:14:55.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.144651 systemd[1]: Stopped target cryptsetup.target. Oct 2 19:14:55.153475 kernel: audit: type=1131 audit(1696274095.143:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.155505 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Oct 2 19:14:55.159505 systemd[1]: Stopped dracut-initqueue.service. Oct 2 19:14:55.173753 kernel: audit: type=1131 audit(1696274095.158:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.161638 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 19:14:55.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.161882 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 19:14:55.171702 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 19:14:55.173478 systemd[1]: Stopped ignition-files.service. Oct 2 19:14:55.179363 systemd[1]: Stopping ignition-mount.service... Oct 2 19:14:55.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.199889 systemd[1]: Stopping sysroot-boot.service... Oct 2 19:14:55.213733 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 19:14:55.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.214729 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 19:14:55.217430 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 19:14:55.247185 ignition[1380]: INFO : Ignition 2.14.0 Oct 2 19:14:55.247185 ignition[1380]: INFO : Stage: umount Oct 2 19:14:55.247185 ignition[1380]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:14:55.247185 ignition[1380]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:14:55.217657 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 19:14:55.279458 ignition[1380]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:14:55.279458 ignition[1380]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:14:55.231574 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 19:14:55.287858 ignition[1380]: INFO : PUT result: OK Oct 2 19:14:55.231798 systemd[1]: Finished initrd-cleanup.service. 
Oct 2 19:14:55.301802 ignition[1380]: INFO : umount: umount passed Oct 2 19:14:55.305604 ignition[1380]: INFO : Ignition finished successfully Oct 2 19:14:55.308839 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 19:14:55.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.309086 systemd[1]: Stopped ignition-mount.service. Oct 2 19:14:55.311727 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 19:14:55.311836 systemd[1]: Stopped ignition-disks.service. Oct 2 19:14:55.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.320768 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 19:14:55.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.320873 systemd[1]: Stopped ignition-kargs.service. Oct 2 19:14:55.322704 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 2 19:14:55.322804 systemd[1]: Stopped ignition-fetch.service. Oct 2 19:14:55.325054 systemd[1]: Stopped target network.target. Oct 2 19:14:55.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.337399 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 19:14:55.337525 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 19:14:55.341887 systemd[1]: Stopped target paths.target. Oct 2 19:14:55.350316 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 19:14:55.355458 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 19:14:55.363465 systemd[1]: Stopped target slices.target. Oct 2 19:14:55.365649 systemd[1]: Stopped target sockets.target. Oct 2 19:14:55.370448 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 19:14:55.370519 systemd[1]: Closed iscsid.socket. Oct 2 19:14:55.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.372985 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 19:14:55.373057 systemd[1]: Closed iscsiuio.socket. Oct 2 19:14:55.377741 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 19:14:55.377856 systemd[1]: Stopped ignition-setup.service. Oct 2 19:14:55.382384 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:14:55.385764 systemd[1]: Stopping systemd-resolved.service... Oct 2 19:14:55.400016 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 19:14:55.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:14:55.408000 audit: BPF prog-id=6 op=UNLOAD Oct 2 19:14:55.400293 systemd[1]: Stopped systemd-resolved.service. Oct 2 19:14:55.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.401578 systemd-networkd[1186]: eth0: DHCPv6 lease lost Oct 2 19:14:55.407053 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:14:55.407694 systemd[1]: Stopped systemd-networkd.service. Oct 2 19:14:55.418621 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 19:14:55.418725 systemd[1]: Closed systemd-networkd.socket. Oct 2 19:14:55.426000 audit: BPF prog-id=9 op=UNLOAD Oct 2 19:14:55.425645 systemd[1]: Stopping network-cleanup.service... Oct 2 19:14:55.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.427498 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 19:14:55.427642 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 19:14:55.434358 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 19:14:55.434466 systemd[1]: Stopped systemd-sysctl.service. Oct 2 19:14:55.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.446566 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 19:14:55.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.446678 systemd[1]: Stopped systemd-modules-load.service. Oct 2 19:14:55.449613 systemd[1]: Stopping systemd-udevd.service... Oct 2 19:14:55.460165 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 19:14:55.460669 systemd[1]: Stopped systemd-udevd.service. Oct 2 19:14:55.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.467834 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 19:14:55.467950 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 19:14:55.473307 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 19:14:55.473407 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 19:14:55.502393 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 19:14:55.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.502513 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 19:14:55.506921 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 19:14:55.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.507035 systemd[1]: Stopped dracut-cmdline.service. Oct 2 19:14:55.515728 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Oct 2 19:14:55.515835 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 19:14:55.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.529577 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 19:14:55.535025 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 2 19:14:55.535366 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Oct 2 19:14:55.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.545863 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 19:14:55.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.545982 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 19:14:55.553861 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 19:14:55.553985 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 19:14:55.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.561303 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 19:14:55.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.561557 systemd[1]: Stopped sysroot-boot.service. Oct 2 19:14:55.565323 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 19:14:55.565545 systemd[1]: Stopped network-cleanup.service. Oct 2 19:14:55.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.575657 systemd[1]: mnt-oem3169720341.mount: Deactivated successfully. Oct 2 19:14:55.576853 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 19:14:55.579116 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 19:14:55.581929 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 2 19:14:55.583663 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 19:14:55.583768 systemd[1]: Stopped initrd-setup-root.service. Oct 2 19:14:55.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.598291 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 19:14:55.598522 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 19:14:55.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:14:55.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:55.605566 systemd[1]: Reached target initrd-switch-root.target. Oct 2 19:14:55.610734 systemd[1]: Starting initrd-switch-root.service... Oct 2 19:14:55.637594 systemd[1]: Switching root. Oct 2 19:14:55.667177 iscsid[1191]: iscsid shutting down. Oct 2 19:14:55.670313 systemd-journald[309]: Received SIGTERM from PID 1 (systemd). Oct 2 19:14:55.670403 systemd-journald[309]: Journal stopped Oct 2 19:15:01.535738 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 19:15:01.536304 kernel: SELinux: Class anon_inode not defined in policy. Oct 2 19:15:01.536354 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 19:15:01.536392 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 19:15:01.536426 kernel: SELinux: policy capability open_perms=1 Oct 2 19:15:01.536459 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 19:15:01.536491 kernel: SELinux: policy capability always_check_network=0 Oct 2 19:15:01.536523 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 19:15:01.536556 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 19:15:01.536587 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 19:15:01.536618 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 19:15:01.536659 systemd[1]: Successfully loaded SELinux policy in 86.584ms. Oct 2 19:15:01.536939 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.554ms. Oct 2 19:15:01.536982 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:15:01.537017 systemd[1]: Detected virtualization amazon. Oct 2 19:15:01.537050 systemd[1]: Detected architecture arm64. Oct 2 19:15:01.537082 systemd[1]: Detected first boot. Oct 2 19:15:01.540275 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:15:01.540372 systemd[1]: Populated /etc with preset unit settings. Oct 2 19:15:01.540415 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:15:01.540527 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:15:01.540575 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Oct 2 19:15:01.540675 kernel: kauditd_printk_skb: 38 callbacks suppressed Oct 2 19:15:01.540718 kernel: audit: type=1334 audit(1696274100.991:82): prog-id=12 op=LOAD Oct 2 19:15:01.540758 kernel: audit: type=1334 audit(1696274100.993:83): prog-id=3 op=UNLOAD Oct 2 19:15:01.540789 kernel: audit: type=1334 audit(1696274100.995:84): prog-id=13 op=LOAD Oct 2 19:15:01.540822 kernel: audit: type=1334 audit(1696274100.995:85): prog-id=14 op=LOAD Oct 2 19:15:01.540856 kernel: audit: type=1334 audit(1696274100.995:86): prog-id=4 op=UNLOAD Oct 2 19:15:01.540888 kernel: audit: type=1334 audit(1696274100.995:87): prog-id=5 op=UNLOAD Oct 2 19:15:01.540923 kernel: audit: type=1131 audit(1696274100.998:88): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.540956 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 19:15:01.540992 kernel: audit: type=1334 audit(1696274101.003:89): prog-id=12 op=UNLOAD Oct 2 19:15:01.541023 systemd[1]: Stopped iscsiuio.service. Oct 2 19:15:01.541055 kernel: audit: type=1131 audit(1696274101.025:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.541091 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 19:15:01.541155 systemd[1]: Stopped iscsid.service. Oct 2 19:15:01.541194 kernel: audit: type=1131 audit(1696274101.039:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.541224 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 19:15:01.541258 systemd[1]: Stopped initrd-switch-root.service. Oct 2 19:15:01.541288 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 19:15:01.541321 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 19:15:01.541353 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 19:15:01.541398 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Oct 2 19:15:01.541432 systemd[1]: Created slice system-getty.slice. Oct 2 19:15:01.541463 systemd[1]: Created slice system-modprobe.slice. Oct 2 19:15:01.541503 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 19:15:01.541539 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 19:15:01.541572 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 19:15:01.541603 systemd[1]: Created slice user.slice. Oct 2 19:15:01.541636 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:15:01.541667 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 19:15:01.541701 systemd[1]: Set up automount boot.automount. Oct 2 19:15:01.541732 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 19:15:01.541762 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 19:15:01.541791 systemd[1]: Stopped target initrd-fs.target. Oct 2 19:15:01.541823 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 19:15:01.541926 systemd[1]: Reached target integritysetup.target. Oct 2 19:15:01.541965 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:15:01.541997 systemd[1]: Reached target remote-fs.target. Oct 2 19:15:01.542027 systemd[1]: Reached target slices.target. 
Oct 2 19:15:01.542062 systemd[1]: Reached target swap.target. Oct 2 19:15:01.542093 systemd[1]: Reached target torcx.target. Oct 2 19:15:01.542148 systemd[1]: Reached target veritysetup.target. Oct 2 19:15:01.542185 systemd[1]: Listening on systemd-coredump.socket. Oct 2 19:15:01.542223 systemd[1]: Listening on systemd-initctl.socket. Oct 2 19:15:01.542254 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:15:01.542293 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:15:01.542324 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:15:01.542354 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 19:15:01.542391 systemd[1]: Mounting dev-hugepages.mount... Oct 2 19:15:01.542424 systemd[1]: Mounting dev-mqueue.mount... Oct 2 19:15:01.542453 systemd[1]: Mounting media.mount... Oct 2 19:15:01.542485 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 19:15:01.542534 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 19:15:01.542575 systemd[1]: Mounting tmp.mount... Oct 2 19:15:01.542606 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 19:15:01.542636 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 19:15:01.542666 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:15:01.542696 systemd[1]: Starting modprobe@configfs.service... Oct 2 19:15:01.542733 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 19:15:01.542763 systemd[1]: Starting modprobe@drm.service... Oct 2 19:15:01.542794 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 19:15:01.542825 systemd[1]: Starting modprobe@fuse.service... Oct 2 19:15:01.542856 systemd[1]: Starting modprobe@loop.service... Oct 2 19:15:01.542888 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 19:15:01.542919 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 19:15:01.542949 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 19:15:01.542988 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 19:15:01.543019 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 19:15:01.543049 systemd[1]: Stopped systemd-journald.service. Oct 2 19:15:01.543078 kernel: fuse: init (API version 7.34) Oct 2 19:15:01.543107 systemd[1]: Starting systemd-journald.service... Oct 2 19:15:01.547276 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:15:01.547321 systemd[1]: Starting systemd-network-generator.service... Oct 2 19:15:01.547353 systemd[1]: Starting systemd-remount-fs.service... Oct 2 19:15:01.547384 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:15:01.547418 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 19:15:01.547455 systemd[1]: Stopped verity-setup.service. Oct 2 19:15:01.547485 systemd[1]: Mounted dev-hugepages.mount. Oct 2 19:15:01.547518 systemd[1]: Mounted dev-mqueue.mount. Oct 2 19:15:01.547547 systemd[1]: Mounted media.mount. Oct 2 19:15:01.547648 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 19:15:01.547682 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 19:15:01.547712 systemd[1]: Mounted tmp.mount. Oct 2 19:15:01.547742 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:15:01.547777 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 19:15:01.547810 systemd[1]: Finished modprobe@configfs.service. Oct 2 19:15:01.547841 kernel: loop: module loaded Oct 2 19:15:01.547877 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Oct 2 19:15:01.547908 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 19:15:01.547938 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 19:15:01.547973 systemd[1]: Finished modprobe@drm.service. Oct 2 19:15:01.548003 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 19:15:01.548034 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 19:15:01.548067 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 19:15:01.548097 systemd[1]: Finished modprobe@fuse.service. Oct 2 19:15:01.548149 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 19:15:01.548183 systemd[1]: Finished modprobe@loop.service. Oct 2 19:15:01.548213 systemd[1]: Finished systemd-network-generator.service. Oct 2 19:15:01.548248 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:15:01.548280 systemd[1]: Finished systemd-remount-fs.service. Oct 2 19:15:01.548313 systemd[1]: Reached target network-pre.target. Oct 2 19:15:01.548343 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 19:15:01.548373 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 19:15:01.548407 systemd-journald[1487]: Journal started Oct 2 19:15:01.548513 systemd-journald[1487]: Runtime Journal (/run/log/journal/ec2be352b7891ffed8e52cb637b3945d) is 8.0M, max 75.4M, 67.4M free. Oct 2 19:14:56.413000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 19:14:56.587000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:14:56.587000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:14:56.587000 audit: BPF prog-id=10 op=LOAD Oct 2 19:14:56.587000 audit: BPF prog-id=10 op=UNLOAD Oct 2 19:14:56.587000 audit: BPF prog-id=11 op=LOAD Oct 2 19:14:56.587000 audit: BPF prog-id=11 op=UNLOAD Oct 2 19:15:00.991000 audit: BPF prog-id=12 op=LOAD Oct 2 19:15:00.993000 audit: BPF prog-id=3 op=UNLOAD Oct 2 19:15:00.995000 audit: BPF prog-id=13 op=LOAD Oct 2 19:15:00.995000 audit: BPF prog-id=14 op=LOAD Oct 2 19:15:00.995000 audit: BPF prog-id=4 op=UNLOAD Oct 2 19:15:00.995000 audit: BPF prog-id=5 op=UNLOAD Oct 2 19:15:00.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.003000 audit: BPF prog-id=12 op=UNLOAD Oct 2 19:15:01.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Oct 2 19:15:01.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.351000 audit: BPF prog-id=15 op=LOAD Oct 2 19:15:01.352000 audit: BPF prog-id=16 op=LOAD Oct 2 19:15:01.352000 audit: BPF prog-id=17 op=LOAD Oct 2 19:15:01.352000 audit: BPF prog-id=13 op=UNLOAD Oct 2 19:15:01.352000 audit: BPF prog-id=14 op=UNLOAD Oct 2 19:15:01.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:15:01.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.519000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 19:15:01.519000 audit[1487]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffedfb5e70 a2=4000 a3=1 items=0 ppid=1 pid=1487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:01.519000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 19:14:56.810073 /usr/lib/systemd/system-generators/torcx-generator[1413]: time="2023-10-02T19:14:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:15:00.987243 systemd[1]: Queued start job for default target multi-user.target. Oct 2 19:14:56.819868 /usr/lib/systemd/system-generators/torcx-generator[1413]: time="2023-10-02T19:14:56Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:15:01.000400 systemd[1]: systemd-journald.service: Deactivated successfully. 
Oct 2 19:14:56.819923 /usr/lib/systemd/system-generators/torcx-generator[1413]: time="2023-10-02T19:14:56Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:14:56.819997 /usr/lib/systemd/system-generators/torcx-generator[1413]: time="2023-10-02T19:14:56Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 19:14:56.820025 /usr/lib/systemd/system-generators/torcx-generator[1413]: time="2023-10-02T19:14:56Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 19:14:56.820099 /usr/lib/systemd/system-generators/torcx-generator[1413]: time="2023-10-02T19:14:56Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 19:14:56.820161 /usr/lib/systemd/system-generators/torcx-generator[1413]: time="2023-10-02T19:14:56Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 19:14:56.820634 /usr/lib/systemd/system-generators/torcx-generator[1413]: time="2023-10-02T19:14:56Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 19:14:56.820736 /usr/lib/systemd/system-generators/torcx-generator[1413]: time="2023-10-02T19:14:56Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:14:56.820773 /usr/lib/systemd/system-generators/torcx-generator[1413]: time="2023-10-02T19:14:56Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:14:56.821840 /usr/lib/systemd/system-generators/torcx-generator[1413]: time="2023-10-02T19:14:56Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 19:15:01.564493 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 19:15:01.564568 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 19:15:01.564608 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Oct 2 19:14:56.821925 /usr/lib/systemd/system-generators/torcx-generator[1413]: time="2023-10-02T19:14:56Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 19:14:56.821973 /usr/lib/systemd/system-generators/torcx-generator[1413]: time="2023-10-02T19:14:56Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 19:14:56.822012 /usr/lib/systemd/system-generators/torcx-generator[1413]: time="2023-10-02T19:14:56Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 19:14:56.822061 /usr/lib/systemd/system-generators/torcx-generator[1413]: time="2023-10-02T19:14:56Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 19:14:56.822099 /usr/lib/systemd/system-generators/torcx-generator[1413]: time="2023-10-02T19:14:56Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 19:15:00.047260 /usr/lib/systemd/system-generators/torcx-generator[1413]: time="2023-10-02T19:15:00Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:15:00.047831 /usr/lib/systemd/system-generators/torcx-generator[1413]: time="2023-10-02T19:15:00Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:15:00.048093 /usr/lib/systemd/system-generators/torcx-generator[1413]: time="2023-10-02T19:15:00Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:15:00.048599 /usr/lib/systemd/system-generators/torcx-generator[1413]: time="2023-10-02T19:15:00Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:15:00.048723 /usr/lib/systemd/system-generators/torcx-generator[1413]: time="2023-10-02T19:15:00Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 19:15:00.048878 /usr/lib/systemd/system-generators/torcx-generator[1413]: time="2023-10-02T19:15:00Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 19:15:01.590077 systemd[1]: Starting systemd-random-seed.service... Oct 2 19:15:01.590204 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 19:15:01.590255 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:15:01.596315 systemd[1]: Started systemd-journald.service. 
Oct 2 19:15:01.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.598567 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 19:15:01.600803 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 19:15:01.605741 systemd[1]: Starting systemd-journal-flush.service... Oct 2 19:15:01.642013 systemd[1]: Finished systemd-random-seed.service. Oct 2 19:15:01.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.644466 systemd[1]: Reached target first-boot-complete.target. Oct 2 19:15:01.650323 systemd-journald[1487]: Time spent on flushing to /var/log/journal/ec2be352b7891ffed8e52cb637b3945d is 67.637ms for 1138 entries. Oct 2 19:15:01.650323 systemd-journald[1487]: System Journal (/var/log/journal/ec2be352b7891ffed8e52cb637b3945d) is 8.0M, max 195.6M, 187.6M free. Oct 2 19:15:01.771217 systemd-journald[1487]: Received client request to flush runtime journal. Oct 2 19:15:01.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.701435 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:15:01.774351 systemd[1]: Finished systemd-journal-flush.service. Oct 2 19:15:01.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.816932 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:15:01.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.821455 systemd[1]: Starting systemd-udev-settle.service... Oct 2 19:15:01.829545 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 19:15:01.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.834022 systemd[1]: Starting systemd-sysusers.service... Oct 2 19:15:01.859816 udevadm[1529]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 2 19:15:01.971411 systemd[1]: Finished systemd-sysusers.service. Oct 2 19:15:01.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:01.975795 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:15:02.128458 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:15:02.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:15:02.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:02.644000 audit: BPF prog-id=18 op=LOAD Oct 2 19:15:02.645000 audit: BPF prog-id=19 op=LOAD Oct 2 19:15:02.645000 audit: BPF prog-id=7 op=UNLOAD Oct 2 19:15:02.645000 audit: BPF prog-id=8 op=UNLOAD Oct 2 19:15:02.642792 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 19:15:02.647365 systemd[1]: Starting systemd-udevd.service... Oct 2 19:15:02.698435 systemd-udevd[1534]: Using default interface naming scheme 'v252'. Oct 2 19:15:02.734295 systemd[1]: Started systemd-udevd.service. Oct 2 19:15:02.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:02.736000 audit: BPF prog-id=20 op=LOAD Oct 2 19:15:02.739543 systemd[1]: Starting systemd-networkd.service... Oct 2 19:15:02.763000 audit: BPF prog-id=21 op=LOAD Oct 2 19:15:02.763000 audit: BPF prog-id=22 op=LOAD Oct 2 19:15:02.763000 audit: BPF prog-id=23 op=LOAD Oct 2 19:15:02.768272 systemd[1]: Starting systemd-userdbd.service... Oct 2 19:15:02.895293 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Oct 2 19:15:02.954948 systemd[1]: Started systemd-userdbd.service. Oct 2 19:15:02.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:02.987546 (udev-worker)[1540]: Network interface NamePolicy= disabled on kernel command line. Oct 2 19:15:03.182737 systemd-networkd[1539]: lo: Link UP Oct 2 19:15:03.183384 systemd-networkd[1539]: lo: Gained carrier Oct 2 19:15:03.192504 systemd-networkd[1539]: Enumeration completed Oct 2 19:15:03.192863 systemd[1]: Started systemd-networkd.service. Oct 2 19:15:03.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:03.197788 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 19:15:03.203843 systemd-networkd[1539]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:15:03.221170 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 2 19:15:03.221622 systemd-networkd[1539]: eth0: Link UP Oct 2 19:15:03.222644 systemd-networkd[1539]: eth0: Gained carrier Oct 2 19:15:03.242195 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1544) Oct 2 19:15:03.263512 systemd-networkd[1539]: eth0: DHCPv4 address 172.31.24.89/20, gateway 172.31.16.1 acquired from 172.31.16.1 Oct 2 19:15:03.460668 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:15:03.463853 systemd[1]: Finished systemd-udev-settle.service. Oct 2 19:15:03.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:03.469246 systemd[1]: Starting lvm2-activation-early.service... 
Oct 2 19:15:03.495156 lvm[1653]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:15:03.537294 systemd[1]: Finished lvm2-activation-early.service. Oct 2 19:15:03.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:03.539795 systemd[1]: Reached target cryptsetup.target. Oct 2 19:15:03.544951 systemd[1]: Starting lvm2-activation.service... Oct 2 19:15:03.559701 lvm[1654]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:15:03.598418 systemd[1]: Finished lvm2-activation.service. Oct 2 19:15:03.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:03.600547 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:15:03.602464 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 19:15:03.602552 systemd[1]: Reached target local-fs.target. Oct 2 19:15:03.604341 systemd[1]: Reached target machines.target. Oct 2 19:15:03.608815 systemd[1]: Starting ldconfig.service... Oct 2 19:15:03.614070 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 19:15:03.614268 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:15:03.618439 systemd[1]: Starting systemd-boot-update.service... Oct 2 19:15:03.624038 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 19:15:03.631608 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 19:15:03.634817 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:15:03.634969 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:15:03.640457 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 2 19:15:03.668809 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1656 (bootctl) Oct 2 19:15:03.673721 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 19:15:03.703536 systemd-tmpfiles[1659]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 19:15:03.706041 systemd-tmpfiles[1659]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 19:15:03.709221 systemd-tmpfiles[1659]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 19:15:03.737505 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 19:15:03.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:03.837024 systemd-fsck[1665]: fsck.fat 4.2 (2021-01-31) Oct 2 19:15:03.837024 systemd-fsck[1665]: /dev/nvme0n1p1: 236 files, 113463/258078 clusters Oct 2 19:15:03.843290 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Oct 2 19:15:03.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:03.848423 systemd[1]: Mounting boot.mount... Oct 2 19:15:03.890598 systemd[1]: Mounted boot.mount. Oct 2 19:15:03.923973 systemd[1]: Finished systemd-boot-update.service. Oct 2 19:15:03.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:04.134325 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 19:15:04.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:04.139054 systemd[1]: Starting audit-rules.service... Oct 2 19:15:04.145807 systemd[1]: Starting clean-ca-certificates.service... Oct 2 19:15:04.151290 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 19:15:04.154000 audit: BPF prog-id=24 op=LOAD Oct 2 19:15:04.159598 systemd[1]: Starting systemd-resolved.service... Oct 2 19:15:04.163000 audit: BPF prog-id=25 op=LOAD Oct 2 19:15:04.169550 systemd[1]: Starting systemd-timesyncd.service... Oct 2 19:15:04.173939 systemd[1]: Starting systemd-update-utmp.service... Oct 2 19:15:04.203389 systemd[1]: Finished clean-ca-certificates.service. Oct 2 19:15:04.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:04.205682 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 19:15:04.245659 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 19:15:04.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:04.247000 audit[1685]: SYSTEM_BOOT pid=1685 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:15:04.276055 systemd[1]: Finished systemd-update-utmp.service. Oct 2 19:15:04.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:04.426150 systemd[1]: Started systemd-timesyncd.service. Oct 2 19:15:04.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:04.428231 systemd[1]: Reached target time-set.target. Oct 2 19:15:04.473702 systemd-timesyncd[1684]: Contacted time server 162.159.200.1:123 (0.flatcar.pool.ntp.org). 
Oct 2 19:15:04.473956 systemd-timesyncd[1684]: Initial clock synchronization to Mon 2023-10-02 19:15:04.404136 UTC. Oct 2 19:15:04.486350 systemd-resolved[1683]: Positive Trust Anchors: Oct 2 19:15:04.487288 systemd-resolved[1683]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:15:04.487822 systemd-resolved[1683]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:15:04.736564 systemd-resolved[1683]: Defaulting to hostname 'linux'. Oct 2 19:15:04.740060 systemd[1]: Started systemd-resolved.service. Oct 2 19:15:04.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:04.742006 systemd[1]: Reached target network.target. Oct 2 19:15:04.743767 systemd[1]: Reached target nss-lookup.target. Oct 2 19:15:04.755740 augenrules[1707]: No rules Oct 2 19:15:04.756291 systemd-networkd[1539]: eth0: Gained IPv6LL Oct 2 19:15:04.754000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 19:15:04.754000 audit[1707]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffff561d50 a2=420 a3=0 items=0 ppid=1679 pid=1707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:04.754000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 19:15:04.759545 systemd[1]: Finished audit-rules.service. Oct 2 19:15:04.762488 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 19:15:04.764936 systemd[1]: Reached target network-online.target. Oct 2 19:15:04.910936 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 19:15:04.912664 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 19:15:05.202900 ldconfig[1655]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 19:15:05.208600 systemd[1]: Finished ldconfig.service. Oct 2 19:15:05.212626 systemd[1]: Starting systemd-update-done.service... Oct 2 19:15:05.234577 systemd[1]: Finished systemd-update-done.service. Oct 2 19:15:05.236663 systemd[1]: Reached target sysinit.target. Oct 2 19:15:05.238907 systemd[1]: Started motdgen.path. Oct 2 19:15:05.240871 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 19:15:05.243589 systemd[1]: Started logrotate.timer. Oct 2 19:15:05.245382 systemd[1]: Started mdadm.timer. Oct 2 19:15:05.246858 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 19:15:05.248632 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 19:15:05.248689 systemd[1]: Reached target paths.target. Oct 2 19:15:05.250244 systemd[1]: Reached target timers.target. Oct 2 19:15:05.252302 systemd[1]: Listening on dbus.socket. 
Oct 2 19:15:05.255759 systemd[1]: Starting docker.socket... Oct 2 19:15:05.267036 systemd[1]: Listening on sshd.socket. Oct 2 19:15:05.269320 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:15:05.270181 systemd[1]: Listening on docker.socket. Oct 2 19:15:05.271953 systemd[1]: Reached target sockets.target. Oct 2 19:15:05.273691 systemd[1]: Reached target basic.target. Oct 2 19:15:05.275447 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:15:05.275636 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:15:05.277796 systemd[1]: Started amazon-ssm-agent.service. Oct 2 19:15:05.286878 systemd[1]: Starting containerd.service... Oct 2 19:15:05.290822 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Oct 2 19:15:05.298914 systemd[1]: Starting dbus.service... Oct 2 19:15:05.309935 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 19:15:05.315591 systemd[1]: Starting extend-filesystems.service... Oct 2 19:15:05.317350 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 19:15:05.319687 systemd[1]: Starting motdgen.service... Oct 2 19:15:05.324089 systemd[1]: Started nvidia.service. Oct 2 19:15:05.338339 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 19:15:05.342376 systemd[1]: Starting prepare-critools.service... Oct 2 19:15:05.346465 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 19:15:05.352020 systemd[1]: Starting sshd-keygen.service... Oct 2 19:15:05.358690 jq[1719]: false Oct 2 19:15:05.362621 systemd[1]: Starting systemd-logind.service... Oct 2 19:15:05.364585 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:15:05.364714 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 2 19:15:05.372470 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 19:15:05.379383 systemd[1]: Starting update-engine.service... Oct 2 19:15:05.391451 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 19:15:05.438306 systemd[1]: Created slice system-sshd.slice. Oct 2 19:15:05.461776 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 19:15:05.462143 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 19:15:05.508331 jq[1729]: true Oct 2 19:15:05.555008 dbus-daemon[1718]: [system] SELinux support is enabled Oct 2 19:15:05.555828 systemd[1]: Started dbus.service. Oct 2 19:15:05.571062 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 19:15:05.571152 systemd[1]: Reached target system-config.target. Oct 2 19:15:05.573403 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 19:15:05.573443 systemd[1]: Reached target user-config.target. 
Oct 2 19:15:05.594432 tar[1732]: ./ Oct 2 19:15:05.594432 tar[1732]: ./loopback Oct 2 19:15:05.585868 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 19:15:05.586224 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 2 19:15:05.614362 dbus-daemon[1718]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1539 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Oct 2 19:15:05.622368 dbus-daemon[1718]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 2 19:15:05.632254 systemd[1]: Starting systemd-hostnamed.service... Oct 2 19:15:05.651223 tar[1742]: crictl Oct 2 19:15:05.653485 jq[1738]: true Oct 2 19:15:05.707274 extend-filesystems[1720]: Found nvme0n1 Oct 2 19:15:05.713400 extend-filesystems[1720]: Found nvme0n1p1 Oct 2 19:15:05.718421 extend-filesystems[1720]: Found nvme0n1p2 Oct 2 19:15:05.720268 extend-filesystems[1720]: Found nvme0n1p3 Oct 2 19:15:05.729493 extend-filesystems[1720]: Found usr Oct 2 19:15:05.731147 extend-filesystems[1720]: Found nvme0n1p4 Oct 2 19:15:05.732802 extend-filesystems[1720]: Found nvme0n1p6 Oct 2 19:15:05.734815 extend-filesystems[1720]: Found nvme0n1p7 Oct 2 19:15:05.737970 extend-filesystems[1720]: Found nvme0n1p9 Oct 2 19:15:05.746714 extend-filesystems[1720]: Checking size of /dev/nvme0n1p9 Oct 2 19:15:05.772160 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 19:15:05.772516 systemd[1]: Finished motdgen.service. Oct 2 19:15:05.792862 amazon-ssm-agent[1715]: 2023/10/02 19:15:05 Failed to load instance info from vault. RegistrationKey does not exist. Oct 2 19:15:05.811491 amazon-ssm-agent[1715]: Initializing new seelog logger Oct 2 19:15:05.833728 amazon-ssm-agent[1715]: New Seelog Logger Creation Complete Oct 2 19:15:05.837969 amazon-ssm-agent[1715]: 2023/10/02 19:15:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 2 19:15:05.839092 amazon-ssm-agent[1715]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 2 19:15:05.839834 amazon-ssm-agent[1715]: 2023/10/02 19:15:05 processing appconfig overrides Oct 2 19:15:05.856587 update_engine[1728]: I1002 19:15:05.856078 1728 main.cc:92] Flatcar Update Engine starting Oct 2 19:15:05.877160 systemd[1]: Started update-engine.service. Oct 2 19:15:05.881982 systemd[1]: Started locksmithd.service. Oct 2 19:15:05.883869 extend-filesystems[1720]: Resized partition /dev/nvme0n1p9 Oct 2 19:15:05.886039 update_engine[1728]: I1002 19:15:05.885987 1728 update_check_scheduler.cc:74] Next update check in 2m7s Oct 2 19:15:05.913103 extend-filesystems[1783]: resize2fs 1.46.5 (30-Dec-2021) Oct 2 19:15:05.949207 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Oct 2 19:15:06.006617 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Oct 2 19:15:06.032450 tar[1732]: ./bandwidth Oct 2 19:15:06.036437 extend-filesystems[1783]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Oct 2 19:15:06.036437 extend-filesystems[1783]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 2 19:15:06.036437 extend-filesystems[1783]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Oct 2 19:15:06.045383 extend-filesystems[1720]: Resized filesystem in /dev/nvme0n1p9 Oct 2 19:15:06.047953 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 19:15:06.048469 systemd[1]: Finished extend-filesystems.service. 
Oct 2 19:15:06.065679 systemd-logind[1727]: Watching system buttons on /dev/input/event0 (Power Button) Oct 2 19:15:06.071380 systemd-logind[1727]: New seat seat0. Oct 2 19:15:06.076660 systemd[1]: Started systemd-logind.service. Oct 2 19:15:06.090477 bash[1794]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:15:06.091989 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 19:15:06.153102 dbus-daemon[1718]: [system] Successfully activated service 'org.freedesktop.hostname1' Oct 2 19:15:06.153372 systemd[1]: Started systemd-hostnamed.service. Oct 2 19:15:06.156370 dbus-daemon[1718]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1751 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Oct 2 19:15:06.160993 systemd[1]: Starting polkit.service... Oct 2 19:15:06.179229 env[1736]: time="2023-10-02T19:15:06.179039952Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 19:15:06.180868 systemd[1]: nvidia.service: Deactivated successfully. Oct 2 19:15:06.221317 polkitd[1804]: Started polkitd version 121 Oct 2 19:15:06.244272 polkitd[1804]: Loading rules from directory /etc/polkit-1/rules.d Oct 2 19:15:06.244570 polkitd[1804]: Loading rules from directory /usr/share/polkit-1/rules.d Oct 2 19:15:06.252402 polkitd[1804]: Finished loading, compiling and executing 2 rules Oct 2 19:15:06.255445 dbus-daemon[1718]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Oct 2 19:15:06.255714 systemd[1]: Started polkit.service. Oct 2 19:15:06.258087 polkitd[1804]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Oct 2 19:15:06.299660 systemd-hostnamed[1751]: Hostname set to (transient) Oct 2 19:15:06.299758 systemd-resolved[1683]: System hostname changed to 'ip-172-31-24-89'. Oct 2 19:15:06.387036 env[1736]: time="2023-10-02T19:15:06.386950018Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 19:15:06.396745 env[1736]: time="2023-10-02T19:15:06.396665832Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:15:06.407239 env[1736]: time="2023-10-02T19:15:06.407092421Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:15:06.407239 env[1736]: time="2023-10-02T19:15:06.407228600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:15:06.407662 env[1736]: time="2023-10-02T19:15:06.407605590Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:15:06.407773 env[1736]: time="2023-10-02T19:15:06.407656489Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Oct 2 19:15:06.407773 env[1736]: time="2023-10-02T19:15:06.407690358Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 19:15:06.407773 env[1736]: time="2023-10-02T19:15:06.407718558Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 19:15:06.407917 env[1736]: time="2023-10-02T19:15:06.407899384Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:15:06.408993 env[1736]: time="2023-10-02T19:15:06.408932748Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:15:06.409312 env[1736]: time="2023-10-02T19:15:06.409256576Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:15:06.409416 env[1736]: time="2023-10-02T19:15:06.409316287Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 19:15:06.409474 env[1736]: time="2023-10-02T19:15:06.409452157Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 19:15:06.409532 env[1736]: time="2023-10-02T19:15:06.409480166Z" level=info msg="metadata content store policy set" policy=shared Oct 2 19:15:06.422220 tar[1732]: ./ptp Oct 2 19:15:06.426264 env[1736]: time="2023-10-02T19:15:06.426191302Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 19:15:06.426419 env[1736]: time="2023-10-02T19:15:06.426271413Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 19:15:06.426419 env[1736]: time="2023-10-02T19:15:06.426308057Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 19:15:06.426419 env[1736]: time="2023-10-02T19:15:06.426379034Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 19:15:06.426594 env[1736]: time="2023-10-02T19:15:06.426419203Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 19:15:06.426594 env[1736]: time="2023-10-02T19:15:06.426453049Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 19:15:06.426594 env[1736]: time="2023-10-02T19:15:06.426484953Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 19:15:06.427210 env[1736]: time="2023-10-02T19:15:06.427001992Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 19:15:06.427210 env[1736]: time="2023-10-02T19:15:06.427071338Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 19:15:06.427210 env[1736]: time="2023-10-02T19:15:06.427141470Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 19:15:06.427210 env[1736]: time="2023-10-02T19:15:06.427182043Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Oct 2 19:15:06.427473 env[1736]: time="2023-10-02T19:15:06.427240588Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 19:15:06.427564 env[1736]: time="2023-10-02T19:15:06.427490795Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 19:15:06.427769 env[1736]: time="2023-10-02T19:15:06.427703763Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 19:15:06.428935 env[1736]: time="2023-10-02T19:15:06.428851798Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 19:15:06.429087 env[1736]: time="2023-10-02T19:15:06.428946320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 19:15:06.429087 env[1736]: time="2023-10-02T19:15:06.428983583Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 19:15:06.429232 env[1736]: time="2023-10-02T19:15:06.429172090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 19:15:06.429232 env[1736]: time="2023-10-02T19:15:06.429215164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 19:15:06.429390 env[1736]: time="2023-10-02T19:15:06.429248271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 19:15:06.429390 env[1736]: time="2023-10-02T19:15:06.429285868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 19:15:06.429390 env[1736]: time="2023-10-02T19:15:06.429318498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 19:15:06.429390 env[1736]: time="2023-10-02T19:15:06.429350414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 19:15:06.429390 env[1736]: time="2023-10-02T19:15:06.429381913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 19:15:06.429643 env[1736]: time="2023-10-02T19:15:06.429412269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 19:15:06.429643 env[1736]: time="2023-10-02T19:15:06.429449366Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 19:15:06.429841 env[1736]: time="2023-10-02T19:15:06.429759606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 19:15:06.429841 env[1736]: time="2023-10-02T19:15:06.429828940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 19:15:06.429970 env[1736]: time="2023-10-02T19:15:06.429863107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 19:15:06.429970 env[1736]: time="2023-10-02T19:15:06.429892855Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 19:15:06.429970 env[1736]: time="2023-10-02T19:15:06.429927772Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 19:15:06.429970 env[1736]: time="2023-10-02T19:15:06.429954544Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 19:15:06.430190 env[1736]: time="2023-10-02T19:15:06.429989782Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 19:15:06.430190 env[1736]: time="2023-10-02T19:15:06.430062224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 2 19:15:06.441788 env[1736]: time="2023-10-02T19:15:06.441634860Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 19:15:06.441788 env[1736]: time="2023-10-02T19:15:06.441772515Z" level=info msg="Connect containerd service" Oct 2 19:15:06.443208 env[1736]: time="2023-10-02T19:15:06.441851007Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 19:15:06.447910 env[1736]: time="2023-10-02T19:15:06.447833828Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 19:15:06.448468 env[1736]: time="2023-10-02T19:15:06.448416343Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Oct 2 19:15:06.448595 env[1736]: time="2023-10-02T19:15:06.448533730Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 2 19:15:06.448749 systemd[1]: Started containerd.service. Oct 2 19:15:06.450331 env[1736]: time="2023-10-02T19:15:06.450013084Z" level=info msg="containerd successfully booted in 0.306529s" Oct 2 19:15:06.465147 env[1736]: time="2023-10-02T19:15:06.463091011Z" level=info msg="Start subscribing containerd event" Oct 2 19:15:06.465147 env[1736]: time="2023-10-02T19:15:06.463381994Z" level=info msg="Start recovering state" Oct 2 19:15:06.465147 env[1736]: time="2023-10-02T19:15:06.463666273Z" level=info msg="Start event monitor" Oct 2 19:15:06.465147 env[1736]: time="2023-10-02T19:15:06.463703774Z" level=info msg="Start snapshots syncer" Oct 2 19:15:06.465147 env[1736]: time="2023-10-02T19:15:06.463731046Z" level=info msg="Start cni network conf syncer for default" Oct 2 19:15:06.465147 env[1736]: time="2023-10-02T19:15:06.463931414Z" level=info msg="Start streaming server" Oct 2 19:15:06.610897 coreos-metadata[1717]: Oct 02 19:15:06.609 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Oct 2 19:15:06.613537 tar[1732]: ./vlan Oct 2 19:15:06.614859 coreos-metadata[1717]: Oct 02 19:15:06.611 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Oct 2 19:15:06.617767 coreos-metadata[1717]: Oct 02 19:15:06.617 INFO Fetch successful Oct 2 19:15:06.618431 coreos-metadata[1717]: Oct 02 19:15:06.618 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Oct 2 19:15:06.619395 coreos-metadata[1717]: Oct 02 19:15:06.618 INFO Fetch successful Oct 2 19:15:06.627105 unknown[1717]: wrote ssh authorized keys file for user: core Oct 2 19:15:06.671297 update-ssh-keys[1849]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:15:06.672874 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Oct 2 19:15:06.809430 tar[1732]: ./host-device Oct 2 19:15:06.845019 amazon-ssm-agent[1715]: 2023-10-02 19:15:06 INFO Entering SSM Agent hibernate - AccessDeniedException: User: arn:aws:sts::075585003325:assumed-role/jenkins-test/i-00a3a66153216e1a6 is not authorized to perform: ssm:UpdateInstanceInformation on resource: arn:aws:ec2:us-west-2:075585003325:instance/i-00a3a66153216e1a6 because no identity-based policy allows the ssm:UpdateInstanceInformation action Oct 2 19:15:06.845019 amazon-ssm-agent[1715]: status code: 400, request id: 2d2a3acd-4b11-4c41-8e66-e7cc3c42b74d Oct 2 19:15:06.845019 amazon-ssm-agent[1715]: 2023-10-02 19:15:06 INFO Agent is in hibernate mode. Reducing logging. Logging will be reduced to one log per backoff period Oct 2 19:15:06.936556 tar[1732]: ./tuning Oct 2 19:15:07.035885 tar[1732]: ./vrf Oct 2 19:15:07.145943 tar[1732]: ./sbr Oct 2 19:15:07.251144 tar[1732]: ./tap Oct 2 19:15:07.380948 tar[1732]: ./dhcp Oct 2 19:15:07.653507 tar[1732]: ./static Oct 2 19:15:07.700051 tar[1732]: ./firewall Oct 2 19:15:07.729502 systemd[1]: Finished prepare-critools.service. Oct 2 19:15:07.789610 tar[1732]: ./macvlan Oct 2 19:15:07.853033 tar[1732]: ./dummy Oct 2 19:15:07.916532 tar[1732]: ./bridge Oct 2 19:15:07.984897 tar[1732]: ./ipvlan Oct 2 19:15:08.046745 tar[1732]: ./portmap Oct 2 19:15:08.107293 tar[1732]: ./host-local Oct 2 19:15:08.188530 systemd[1]: Finished prepare-cni-plugins.service. 
Oct 2 19:15:08.317907 locksmithd[1782]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 19:15:08.798785 sshd_keygen[1766]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 19:15:08.857264 systemd[1]: Finished sshd-keygen.service. Oct 2 19:15:08.862418 systemd[1]: Starting issuegen.service... Oct 2 19:15:08.867693 systemd[1]: Started sshd@0-172.31.24.89:22-139.178.89.65:50930.service. Oct 2 19:15:08.887240 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 19:15:08.887584 systemd[1]: Finished issuegen.service. Oct 2 19:15:08.892062 systemd[1]: Starting systemd-user-sessions.service... Oct 2 19:15:08.916343 systemd[1]: Finished systemd-user-sessions.service. Oct 2 19:15:08.920959 systemd[1]: Started getty@tty1.service. Oct 2 19:15:08.925528 systemd[1]: Started serial-getty@ttyS0.service. Oct 2 19:15:08.927781 systemd[1]: Reached target getty.target. Oct 2 19:15:08.929571 systemd[1]: Reached target multi-user.target. Oct 2 19:15:08.933997 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 19:15:08.958274 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 19:15:08.958639 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 19:15:08.961503 systemd[1]: Startup finished in 1.187s (kernel) + 11.692s (initrd) + 12.668s (userspace) = 25.548s. Oct 2 19:15:09.179900 sshd[1920]: Accepted publickey for core from 139.178.89.65 port 50930 ssh2: RSA SHA256:xq1jsPPMn3xJqYX9WbisZ9n0n6wOxmd44nRnO32wqqo Oct 2 19:15:09.185328 sshd[1920]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:15:09.202951 systemd[1]: Created slice user-500.slice. Oct 2 19:15:09.205375 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 19:15:09.214247 systemd-logind[1727]: New session 1 of user core. Oct 2 19:15:09.233344 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 19:15:09.237853 systemd[1]: Starting user@500.service... Oct 2 19:15:09.248415 (systemd)[1929]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:15:09.499886 systemd[1929]: Queued start job for default target default.target. Oct 2 19:15:09.500951 systemd[1929]: Reached target paths.target. Oct 2 19:15:09.501002 systemd[1929]: Reached target sockets.target. Oct 2 19:15:09.501035 systemd[1929]: Reached target timers.target. Oct 2 19:15:09.501064 systemd[1929]: Reached target basic.target. Oct 2 19:15:09.501179 systemd[1929]: Reached target default.target. Oct 2 19:15:09.501245 systemd[1929]: Startup finished in 235ms. Oct 2 19:15:09.502111 systemd[1]: Started user@500.service. Oct 2 19:15:09.504091 systemd[1]: Started session-1.scope. Oct 2 19:15:09.657440 systemd[1]: Started sshd@1-172.31.24.89:22-139.178.89.65:43278.service. Oct 2 19:15:09.843930 sshd[1938]: Accepted publickey for core from 139.178.89.65 port 43278 ssh2: RSA SHA256:xq1jsPPMn3xJqYX9WbisZ9n0n6wOxmd44nRnO32wqqo Oct 2 19:15:09.847214 sshd[1938]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:15:09.854890 systemd-logind[1727]: New session 2 of user core. Oct 2 19:15:09.856726 systemd[1]: Started session-2.scope. Oct 2 19:15:10.016367 sshd[1938]: pam_unix(sshd:session): session closed for user core Oct 2 19:15:10.022318 systemd-logind[1727]: Session 2 logged out. Waiting for processes to exit. Oct 2 19:15:10.022897 systemd[1]: sshd@1-172.31.24.89:22-139.178.89.65:43278.service: Deactivated successfully. 
Oct 2 19:15:10.024146 systemd[1]: session-2.scope: Deactivated successfully. Oct 2 19:15:10.025750 systemd-logind[1727]: Removed session 2. Oct 2 19:15:10.048481 systemd[1]: Started sshd@2-172.31.24.89:22-139.178.89.65:43286.service. Oct 2 19:15:10.228833 sshd[1944]: Accepted publickey for core from 139.178.89.65 port 43286 ssh2: RSA SHA256:xq1jsPPMn3xJqYX9WbisZ9n0n6wOxmd44nRnO32wqqo Oct 2 19:15:10.232523 sshd[1944]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:15:10.240478 systemd-logind[1727]: New session 3 of user core. Oct 2 19:15:10.241368 systemd[1]: Started session-3.scope. Oct 2 19:15:10.372969 sshd[1944]: pam_unix(sshd:session): session closed for user core Oct 2 19:15:10.378385 systemd-logind[1727]: Session 3 logged out. Waiting for processes to exit. Oct 2 19:15:10.378981 systemd[1]: sshd@2-172.31.24.89:22-139.178.89.65:43286.service: Deactivated successfully. Oct 2 19:15:10.380189 systemd[1]: session-3.scope: Deactivated successfully. Oct 2 19:15:10.381693 systemd-logind[1727]: Removed session 3. Oct 2 19:15:10.405019 systemd[1]: Started sshd@3-172.31.24.89:22-139.178.89.65:43298.service. Oct 2 19:15:10.592852 sshd[1950]: Accepted publickey for core from 139.178.89.65 port 43298 ssh2: RSA SHA256:xq1jsPPMn3xJqYX9WbisZ9n0n6wOxmd44nRnO32wqqo Oct 2 19:15:10.595462 sshd[1950]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:15:10.604831 systemd[1]: Started session-4.scope. Oct 2 19:15:10.606581 systemd-logind[1727]: New session 4 of user core. Oct 2 19:15:10.757649 sshd[1950]: pam_unix(sshd:session): session closed for user core Oct 2 19:15:10.763290 systemd[1]: sshd@3-172.31.24.89:22-139.178.89.65:43298.service: Deactivated successfully. Oct 2 19:15:10.764464 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 19:15:10.765687 systemd-logind[1727]: Session 4 logged out. Waiting for processes to exit. Oct 2 19:15:10.767091 systemd-logind[1727]: Removed session 4. Oct 2 19:15:10.786159 systemd[1]: Started sshd@4-172.31.24.89:22-139.178.89.65:43304.service. Oct 2 19:15:10.966287 sshd[1956]: Accepted publickey for core from 139.178.89.65 port 43304 ssh2: RSA SHA256:xq1jsPPMn3xJqYX9WbisZ9n0n6wOxmd44nRnO32wqqo Oct 2 19:15:10.969538 sshd[1956]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:15:10.978275 systemd-logind[1727]: New session 5 of user core. Oct 2 19:15:10.978713 systemd[1]: Started session-5.scope. Oct 2 19:15:11.146057 sudo[1959]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 19:15:11.147090 sudo[1959]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:15:11.160394 dbus-daemon[1718]: avc: received setenforce notice (enforcing=1) Oct 2 19:15:11.163508 sudo[1959]: pam_unix(sudo:session): session closed for user root Oct 2 19:15:11.188739 sshd[1956]: pam_unix(sshd:session): session closed for user core Oct 2 19:15:11.195003 systemd-logind[1727]: Session 5 logged out. Waiting for processes to exit. Oct 2 19:15:11.196449 systemd[1]: sshd@4-172.31.24.89:22-139.178.89.65:43304.service: Deactivated successfully. Oct 2 19:15:11.197725 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 19:15:11.199297 systemd-logind[1727]: Removed session 5. Oct 2 19:15:11.221829 systemd[1]: Started sshd@5-172.31.24.89:22-139.178.89.65:43306.service. 
Oct 2 19:15:11.410170 sshd[1963]: Accepted publickey for core from 139.178.89.65 port 43306 ssh2: RSA SHA256:xq1jsPPMn3xJqYX9WbisZ9n0n6wOxmd44nRnO32wqqo
Oct 2 19:15:11.413502 sshd[1963]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 2 19:15:11.421646 systemd-logind[1727]: New session 6 of user core.
Oct 2 19:15:11.422707 systemd[1]: Started session-6.scope.
Oct 2 19:15:11.544921 sudo[1967]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 2 19:15:11.545479 sudo[1967]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 2 19:15:11.552991 sudo[1967]: pam_unix(sudo:session): session closed for user root
Oct 2 19:15:11.566860 sudo[1966]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Oct 2 19:15:11.567649 sudo[1966]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 2 19:15:11.592019 systemd[1]: Stopping audit-rules.service...
Oct 2 19:15:11.596000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Oct 2 19:15:11.598540 kernel: kauditd_printk_skb: 69 callbacks suppressed
Oct 2 19:15:11.598608 kernel: audit: type=1305 audit(1696274111.596:157): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Oct 2 19:15:11.599326 auditctl[1970]: No rules
Oct 2 19:15:11.604375 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 2 19:15:11.596000 audit[1970]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffec8d85b0 a2=420 a3=0 items=0 ppid=1 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 2 19:15:11.617162 kernel: audit: type=1300 audit(1696274111.596:157): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffec8d85b0 a2=420 a3=0 items=0 ppid=1 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 2 19:15:11.604739 systemd[1]: Stopped audit-rules.service.
Oct 2 19:15:11.616575 systemd[1]: Starting audit-rules.service...
Oct 2 19:15:11.596000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
Oct 2 19:15:11.622272 kernel: audit: type=1327 audit(1696274111.596:157): proctitle=2F7362696E2F617564697463746C002D44
Oct 2 19:15:11.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:11.630550 kernel: audit: type=1131 audit(1696274111.604:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:11.681821 augenrules[1987]: No rules
Oct 2 19:15:11.683709 systemd[1]: Finished audit-rules.service.
Oct 2 19:15:11.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:11.693771 sudo[1966]: pam_unix(sudo:session): session closed for user root
Oct 2 19:15:11.692000 audit[1966]: USER_END pid=1966 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:11.703303 kernel: audit: type=1130 audit(1696274111.682:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:11.703366 kernel: audit: type=1106 audit(1696274111.692:160): pid=1966 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:11.703409 kernel: audit: type=1104 audit(1696274111.692:161): pid=1966 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:11.692000 audit[1966]: CRED_DISP pid=1966 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:11.717742 sshd[1963]: pam_unix(sshd:session): session closed for user core
Oct 2 19:15:11.718000 audit[1963]: USER_END pid=1963 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Oct 2 19:15:11.718000 audit[1963]: CRED_DISP pid=1963 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Oct 2 19:15:11.742337 kernel: audit: type=1106 audit(1696274111.718:162): pid=1963 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Oct 2 19:15:11.742423 kernel: audit: type=1104 audit(1696274111.718:163): pid=1963 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Oct 2 19:15:11.742936 systemd[1]: sshd@5-172.31.24.89:22-139.178.89.65:43306.service: Deactivated successfully.
Oct 2 19:15:11.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.24.89:22-139.178.89.65:43306 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:11.744167 systemd[1]: session-6.scope: Deactivated successfully.
Oct 2 19:15:11.753283 kernel: audit: type=1131 audit(1696274111.742:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.24.89:22-139.178.89.65:43306 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:11.754489 systemd-logind[1727]: Session 6 logged out. Waiting for processes to exit.
Oct 2 19:15:11.763473 systemd-logind[1727]: Removed session 6.
Oct 2 19:15:11.768525 systemd[1]: Started sshd@6-172.31.24.89:22-139.178.89.65:43316.service.
Oct 2 19:15:11.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.24.89:22-139.178.89.65:43316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:11.950000 audit[1993]: USER_ACCT pid=1993 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Oct 2 19:15:11.951439 sshd[1993]: Accepted publickey for core from 139.178.89.65 port 43316 ssh2: RSA SHA256:xq1jsPPMn3xJqYX9WbisZ9n0n6wOxmd44nRnO32wqqo
Oct 2 19:15:11.952000 audit[1993]: CRED_ACQ pid=1993 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Oct 2 19:15:11.953000 audit[1993]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe65f5f10 a2=3 a3=1 items=0 ppid=1 pid=1993 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 2 19:15:11.953000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Oct 2 19:15:11.955077 sshd[1993]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 2 19:15:11.963465 systemd-logind[1727]: New session 7 of user core.
Oct 2 19:15:11.964436 systemd[1]: Started session-7.scope.
Oct 2 19:15:11.972000 audit[1993]: USER_START pid=1993 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Oct 2 19:15:11.978000 audit[1995]: CRED_ACQ pid=1995 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Oct 2 19:15:12.082000 audit[1996]: USER_ACCT pid=1996 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:12.083749 sudo[1996]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 2 19:15:12.083000 audit[1996]: CRED_REFR pid=1996 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:12.084969 sudo[1996]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 2 19:15:12.088000 audit[1996]: USER_START pid=1996 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:12.758210 systemd[1]: Reloading.
Oct 2 19:15:12.939927 /usr/lib/systemd/system-generators/torcx-generator[2025]: time="2023-10-02T19:15:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:15:12.940503 /usr/lib/systemd/system-generators/torcx-generator[2025]: time="2023-10-02T19:15:12Z" level=info msg="torcx already run" Oct 2 19:15:13.196263 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:15:13.196303 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:15:13.239077 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:15:13.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.388000 audit: BPF prog-id=34 op=LOAD Oct 2 19:15:13.388000 audit: BPF prog-id=15 op=UNLOAD Oct 2 19:15:13.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.388000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.388000 audit: BPF prog-id=35 op=LOAD Oct 2 19:15:13.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.389000 audit: BPF prog-id=36 
op=LOAD Oct 2 19:15:13.389000 audit: BPF prog-id=16 op=UNLOAD Oct 2 19:15:13.389000 audit: BPF prog-id=17 op=UNLOAD Oct 2 19:15:13.389000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.389000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.389000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.389000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.389000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.390000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.390000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.390000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.390000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.390000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.390000 audit: BPF prog-id=37 op=LOAD Oct 2 19:15:13.390000 audit: BPF prog-id=21 op=UNLOAD Oct 2 19:15:13.390000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.390000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.390000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.390000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.390000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.390000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:15:13.390000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.390000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.390000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.390000 audit: BPF prog-id=38 op=LOAD Oct 2 19:15:13.390000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.390000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.390000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.390000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.390000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.390000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.390000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.390000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.390000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.390000 audit: BPF prog-id=39 op=LOAD Oct 2 19:15:13.390000 audit: BPF prog-id=22 op=UNLOAD Oct 2 19:15:13.390000 audit: BPF prog-id=23 op=UNLOAD Oct 2 19:15:13.391000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.391000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.391000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.391000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.391000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.391000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.391000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.391000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.391000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.391000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.391000 audit: BPF prog-id=40 op=LOAD Oct 2 19:15:13.391000 audit: BPF prog-id=29 op=UNLOAD Oct 2 19:15:13.391000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.391000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.391000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.391000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.391000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.391000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.391000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.391000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.391000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.391000 audit: BPF prog-id=41 op=LOAD Oct 2 19:15:13.391000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.391000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.391000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.392000 audit: BPF prog-id=42 op=LOAD Oct 2 19:15:13.392000 audit: BPF prog-id=30 op=UNLOAD Oct 2 19:15:13.392000 audit: BPF prog-id=31 op=UNLOAD Oct 2 19:15:13.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.393000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.393000 audit: BPF prog-id=43 op=LOAD Oct 2 19:15:13.393000 audit: BPF prog-id=24 op=UNLOAD Oct 2 19:15:13.399000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.399000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.399000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.399000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.399000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.399000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.399000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.399000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.399000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.399000 audit: BPF prog-id=44 op=LOAD Oct 2 19:15:13.399000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.399000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.399000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.399000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.399000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.399000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:15:13.399000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.399000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.399000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.399000 audit: BPF prog-id=45 op=LOAD Oct 2 19:15:13.399000 audit: BPF prog-id=18 op=UNLOAD Oct 2 19:15:13.399000 audit: BPF prog-id=19 op=UNLOAD Oct 2 19:15:13.400000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.400000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.400000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.400000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.400000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.400000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.400000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.400000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.400000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.400000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.400000 audit: BPF prog-id=46 op=LOAD Oct 2 19:15:13.400000 audit: BPF prog-id=25 op=UNLOAD Oct 2 19:15:13.402000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.402000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.402000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:15:13.402000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.402000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.402000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.402000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.402000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.402000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.402000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.403000 audit: BPF prog-id=47 op=LOAD Oct 2 19:15:13.403000 audit: BPF prog-id=20 op=UNLOAD Oct 2 19:15:13.405000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.405000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.405000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.405000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.405000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.405000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.405000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.405000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.405000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.405000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.405000 audit: BPF prog-id=48 op=LOAD Oct 2 19:15:13.405000 audit: BPF prog-id=32 op=UNLOAD Oct 2 19:15:13.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.408000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.408000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.409000 audit: BPF prog-id=49 op=LOAD Oct 2 19:15:13.409000 audit: BPF prog-id=26 op=UNLOAD Oct 2 19:15:13.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.409000 audit: BPF prog-id=50 op=LOAD Oct 2 19:15:13.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.409000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.409000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.409000 audit: BPF prog-id=51 op=LOAD Oct 2 19:15:13.409000 audit: BPF prog-id=27 op=UNLOAD Oct 2 19:15:13.409000 audit: BPF prog-id=28 op=UNLOAD Oct 2 19:15:13.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:13.429557 systemd[1]: Started kubelet.service. Oct 2 19:15:13.462416 systemd[1]: Starting coreos-metadata.service... 
Oct 2 19:15:13.612828 kubelet[2080]: E1002 19:15:13.612747 2080 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 2 19:15:13.617344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 2 19:15:13.617669 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 2 19:15:13.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Oct 2 19:15:13.656517 coreos-metadata[2088]: Oct 02 19:15:13.656 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Oct 2 19:15:13.657878 coreos-metadata[2088]: Oct 02 19:15:13.657 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1
Oct 2 19:15:13.658635 coreos-metadata[2088]: Oct 02 19:15:13.658 INFO Fetch successful
Oct 2 19:15:13.658635 coreos-metadata[2088]: Oct 02 19:15:13.658 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1
Oct 2 19:15:13.659585 coreos-metadata[2088]: Oct 02 19:15:13.659 INFO Fetch successful
Oct 2 19:15:13.659585 coreos-metadata[2088]: Oct 02 19:15:13.659 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1
Oct 2 19:15:13.660142 coreos-metadata[2088]: Oct 02 19:15:13.659 INFO Fetch successful
Oct 2 19:15:13.660142 coreos-metadata[2088]: Oct 02 19:15:13.660 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1
Oct 2 19:15:13.660893 coreos-metadata[2088]: Oct 02 19:15:13.660 INFO Fetch successful
Oct 2 19:15:13.660893 coreos-metadata[2088]: Oct 02 19:15:13.660 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1
Oct 2 19:15:13.661657 coreos-metadata[2088]: Oct 02 19:15:13.661 INFO Fetch successful
Oct 2 19:15:13.661657 coreos-metadata[2088]: Oct 02 19:15:13.661 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1
Oct 2 19:15:13.662433 coreos-metadata[2088]: Oct 02 19:15:13.662 INFO Fetch successful
Oct 2 19:15:13.662433 coreos-metadata[2088]: Oct 02 19:15:13.662 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1
Oct 2 19:15:13.663325 coreos-metadata[2088]: Oct 02 19:15:13.663 INFO Fetch successful
Oct 2 19:15:13.663325 coreos-metadata[2088]: Oct 02 19:15:13.663 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1
Oct 2 19:15:13.664178 coreos-metadata[2088]: Oct 02 19:15:13.663 INFO Fetch successful
Oct 2 19:15:13.687745 systemd[1]: Finished coreos-metadata.service.
Oct 2 19:15:13.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:14.734991 systemd[1]: Stopped kubelet.service.
Oct 2 19:15:14.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Oct 2 19:15:14.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:14.785408 systemd[1]: Reloading. Oct 2 19:15:15.002691 /usr/lib/systemd/system-generators/torcx-generator[2145]: time="2023-10-02T19:15:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:15:15.002751 /usr/lib/systemd/system-generators/torcx-generator[2145]: time="2023-10-02T19:15:15Z" level=info msg="torcx already run" Oct 2 19:15:15.192370 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:15:15.192415 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:15:15.235513 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:15:15.384000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.384000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.384000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.384000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.384000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.384000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.384000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.384000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.384000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.385000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.385000 audit: BPF prog-id=52 op=LOAD Oct 2 19:15:15.385000 audit: BPF 
prog-id=34 op=UNLOAD Oct 2 19:15:15.385000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.385000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.385000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.385000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.385000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.385000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.385000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.385000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.385000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.385000 audit: BPF prog-id=53 op=LOAD Oct 2 19:15:15.385000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.385000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.385000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.385000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.385000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.385000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.385000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.385000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.385000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.385000 audit: BPF prog-id=54 op=LOAD Oct 2 19:15:15.385000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:15:15.385000 audit: BPF prog-id=36 op=UNLOAD Oct 2 19:15:15.386000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.386000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.386000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.386000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.386000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.386000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.386000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.386000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.386000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.387000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.387000 audit: BPF prog-id=55 op=LOAD Oct 2 19:15:15.387000 audit: BPF prog-id=37 op=UNLOAD Oct 2 19:15:15.387000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.387000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.387000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.387000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.387000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.387000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.387000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.387000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.387000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.387000 audit: BPF prog-id=56 op=LOAD Oct 2 19:15:15.387000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.387000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.387000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.387000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.387000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.387000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.387000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.387000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.387000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.387000 audit: BPF prog-id=57 op=LOAD Oct 2 19:15:15.387000 audit: BPF prog-id=38 op=UNLOAD Oct 2 19:15:15.387000 audit: BPF prog-id=39 op=UNLOAD Oct 2 19:15:15.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.388000 audit: BPF prog-id=58 op=LOAD Oct 2 19:15:15.388000 audit: BPF prog-id=40 op=UNLOAD Oct 2 19:15:15.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:15:15.388000 audit: BPF prog-id=59 op=LOAD Oct 2 19:15:15.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.388000 audit: BPF prog-id=60 op=LOAD Oct 2 19:15:15.389000 audit: BPF prog-id=41 op=UNLOAD Oct 2 19:15:15.389000 audit: BPF prog-id=42 op=UNLOAD Oct 2 19:15:15.389000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.389000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.389000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.389000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.389000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.389000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.389000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.389000 audit[1]: 
AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.389000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.390000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.390000 audit: BPF prog-id=61 op=LOAD Oct 2 19:15:15.390000 audit: BPF prog-id=43 op=UNLOAD Oct 2 19:15:15.396000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.396000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.396000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.396000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.396000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.396000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.396000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.396000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.396000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.396000 audit: BPF prog-id=62 op=LOAD Oct 2 19:15:15.396000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.396000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.396000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.396000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.396000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.396000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.396000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.396000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.396000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.396000 audit: BPF prog-id=63 op=LOAD Oct 2 19:15:15.396000 audit: BPF prog-id=44 op=UNLOAD Oct 2 19:15:15.396000 audit: BPF prog-id=45 op=UNLOAD Oct 2 19:15:15.397000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.397000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.397000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.397000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.397000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.397000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.397000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.397000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.397000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.397000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.397000 audit: BPF prog-id=64 op=LOAD Oct 2 19:15:15.397000 audit: BPF prog-id=46 op=UNLOAD Oct 2 19:15:15.399000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.399000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.399000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.399000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.399000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.399000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.399000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.399000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.399000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.400000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.400000 audit: BPF prog-id=65 op=LOAD Oct 2 19:15:15.400000 audit: BPF prog-id=47 op=UNLOAD Oct 2 19:15:15.402000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.402000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.402000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.402000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.402000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.402000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.402000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.402000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:15:15.402000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.403000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.403000 audit: BPF prog-id=66 op=LOAD Oct 2 19:15:15.403000 audit: BPF prog-id=48 op=UNLOAD Oct 2 19:15:15.405000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.405000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.405000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.405000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.405000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.405000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.405000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.405000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.405000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.406000 audit: BPF prog-id=67 op=LOAD Oct 2 19:15:15.406000 audit: BPF prog-id=49 op=UNLOAD Oct 2 19:15:15.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.406000 
audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.406000 audit: BPF prog-id=68 op=LOAD Oct 2 19:15:15.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.406000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.406000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:15.406000 audit: BPF prog-id=69 op=LOAD Oct 2 19:15:15.406000 audit: BPF prog-id=50 op=UNLOAD Oct 2 19:15:15.406000 audit: BPF prog-id=51 op=UNLOAD Oct 2 19:15:15.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:15.443902 systemd[1]: Started kubelet.service. 
Oct 2 19:15:15.578436 kubelet[2201]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:15:15.578996 kubelet[2201]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 2 19:15:15.578996 kubelet[2201]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:15:15.579337 kubelet[2201]: I1002 19:15:15.579254 2201 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:15:16.446564 kubelet[2201]: I1002 19:15:16.446523 2201 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Oct 2 19:15:16.446764 kubelet[2201]: I1002 19:15:16.446743 2201 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:15:16.447271 kubelet[2201]: I1002 19:15:16.447245 2201 server.go:895] "Client rotation is on, will bootstrap in background" Oct 2 19:15:16.454907 kubelet[2201]: I1002 19:15:16.454857 2201 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:15:16.469157 kubelet[2201]: W1002 19:15:16.469098 2201 machine.go:65] Cannot read vendor id correctly, set empty. Oct 2 19:15:16.470617 kubelet[2201]: I1002 19:15:16.470578 2201 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 2 19:15:16.471366 kubelet[2201]: I1002 19:15:16.471339 2201 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:15:16.471873 kubelet[2201]: I1002 19:15:16.471837 2201 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 2 19:15:16.472264 kubelet[2201]: I1002 19:15:16.472217 2201 topology_manager.go:138] "Creating topology manager with none policy" Oct 2 19:15:16.472435 kubelet[2201]: I1002 19:15:16.472413 2201 container_manager_linux.go:301] "Creating device plugin manager" Oct 2 19:15:16.472756 kubelet[2201]: I1002 19:15:16.472735 2201 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:15:16.473981 kubelet[2201]: I1002 19:15:16.473955 2201 kubelet.go:393] "Attempting to sync node with API server" Oct 2 19:15:16.474268 kubelet[2201]: I1002 19:15:16.474245 2201 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:15:16.474399 kubelet[2201]: I1002 19:15:16.474377 2201 kubelet.go:309] "Adding apiserver pod source" Oct 2 19:15:16.474588 kubelet[2201]: I1002 19:15:16.474565 2201 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:15:16.474788 kubelet[2201]: E1002 19:15:16.474595 2201 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:16.474788 kubelet[2201]: E1002 19:15:16.474699 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:16.476631 kubelet[2201]: I1002 19:15:16.476581 2201 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:15:16.477845 kubelet[2201]: W1002 19:15:16.477813 2201 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 2 19:15:16.479957 kubelet[2201]: I1002 19:15:16.479910 2201 server.go:1232] "Started kubelet" Oct 2 19:15:16.482264 kubelet[2201]: E1002 19:15:16.481577 2201 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:15:16.482264 kubelet[2201]: E1002 19:15:16.481628 2201 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:15:16.482503 kubelet[2201]: I1002 19:15:16.482327 2201 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Oct 2 19:15:16.482873 kubelet[2201]: I1002 19:15:16.482830 2201 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 2 19:15:16.482964 kubelet[2201]: I1002 19:15:16.482925 2201 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:15:16.482000 audit[2201]: AVC avc: denied { mac_admin } for pid=2201 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:16.484176 kubelet[2201]: I1002 19:15:16.484068 2201 server.go:462] "Adding debug handlers to kubelet server" Oct 2 19:15:16.482000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:15:16.482000 audit[2201]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=400084c540 a1=4000cf33b0 a2=400084c300 a3=25 items=0 ppid=1 pid=2201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:16.482000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:15:16.484813 kubelet[2201]: I1002 19:15:16.484785 2201 kubelet.go:1386] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:15:16.483000 audit[2201]: AVC avc: denied { mac_admin } for pid=2201 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:16.483000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:15:16.483000 audit[2201]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40001f1b00 a1=4000cf33c8 a2=400084c600 a3=25 items=0 ppid=1 pid=2201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:16.483000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:15:16.485474 kubelet[2201]: I1002 19:15:16.485448 2201 kubelet.go:1390] "Unprivileged containerized plugins might not work, could not set 
selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:15:16.485754 kubelet[2201]: I1002 19:15:16.485730 2201 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:15:16.491788 kubelet[2201]: I1002 19:15:16.491737 2201 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 2 19:15:16.494925 kubelet[2201]: I1002 19:15:16.491922 2201 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 2 19:15:16.494925 kubelet[2201]: I1002 19:15:16.493712 2201 reconciler_new.go:29] "Reconciler: start to sync state" Oct 2 19:15:16.498834 kubelet[2201]: E1002 19:15:16.497418 2201 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.24.89.178a60525a78fb7c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.24.89", UID:"172.31.24.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.31.24.89"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 16, 479875964, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 16, 479875964, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.24.89"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:16.498834 kubelet[2201]: W1002 19:15:16.497987 2201 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:15:16.498834 kubelet[2201]: E1002 19:15:16.498045 2201 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:15:16.499205 kubelet[2201]: W1002 19:15:16.498172 2201 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.24.89" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:15:16.499205 kubelet[2201]: E1002 19:15:16.498218 2201 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.24.89" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:15:16.499205 kubelet[2201]: W1002 19:15:16.498418 2201 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:15:16.499205 kubelet[2201]: E1002 19:15:16.498466 2201 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:15:16.499205 kubelet[2201]: E1002 19:15:16.498579 2201 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.24.89\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Oct 2 19:15:16.499484 kubelet[2201]: E1002 19:15:16.498845 2201 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.24.89.178a60525a937307", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.24.89", UID:"172.31.24.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.31.24.89"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 16, 481610503, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 16, 481610503, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.24.89"}': 'events is forbidden: User "system:anonymous" cannot create resource 
"events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:15:16.551612 kubelet[2201]: E1002 19:15:16.551493 2201 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.24.89.178a60525ea4f005", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.24.89", UID:"172.31.24.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.24.89 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.24.89"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 16, 549865477, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 16, 549865477, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.24.89"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:15:16.552303 kubelet[2201]: I1002 19:15:16.552275 2201 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 2 19:15:16.552480 kubelet[2201]: I1002 19:15:16.552458 2201 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 2 19:15:16.552598 kubelet[2201]: I1002 19:15:16.552577 2201 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:15:16.553414 kubelet[2201]: E1002 19:15:16.553278 2201 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.24.89.178a60525ea50c82", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.24.89", UID:"172.31.24.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.24.89 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.24.89"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 16, 549872770, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 16, 549872770, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.24.89"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:16.555248 kubelet[2201]: E1002 19:15:16.555099 2201 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.24.89.178a60525ea5307c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.24.89", UID:"172.31.24.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.24.89 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.24.89"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 16, 549881980, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 16, 549881980, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.24.89"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:15:16.555699 kubelet[2201]: I1002 19:15:16.555618 2201 policy_none.go:49] "None policy: Start" Oct 2 19:15:16.556893 kubelet[2201]: I1002 19:15:16.556865 2201 memory_manager.go:169] "Starting memorymanager" policy="None" Oct 2 19:15:16.557086 kubelet[2201]: I1002 19:15:16.557065 2201 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:15:16.563000 audit[2216]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=2216 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:16.563000 audit[2216]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd5bb5e70 a2=0 a3=1 items=0 ppid=2201 pid=2216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:16.563000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:15:16.568215 systemd[1]: Created slice kubepods.slice. Oct 2 19:15:16.569000 audit[2218]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2218 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:16.569000 audit[2218]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffe0418060 a2=0 a3=1 items=0 ppid=2201 pid=2218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:16.569000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:15:16.578723 systemd[1]: Created slice kubepods-burstable.slice. Oct 2 19:15:16.585184 systemd[1]: Created slice kubepods-besteffort.slice. 
Oct 2 19:15:16.593320 kubelet[2201]: I1002 19:15:16.593287 2201 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:15:16.592000 audit[2201]: AVC avc: denied { mac_admin } for pid=2201 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:16.592000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:15:16.594242 kubelet[2201]: I1002 19:15:16.593310 2201 kubelet_node_status.go:70] "Attempting to register node" node="172.31.24.89" Oct 2 19:15:16.592000 audit[2201]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000f819e0 a1=4000fc8870 a2=4000f819b0 a3=25 items=0 ppid=1 pid=2201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:16.595858 kubelet[2201]: E1002 19:15:16.595733 2201 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.24.89.178a60525ea4f005", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.24.89", UID:"172.31.24.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.24.89 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.24.89"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 16, 549865477, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 16, 593255072, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.24.89"}': 'events "172.31.24.89.178a60525ea4f005" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:16.597517 kubelet[2201]: E1002 19:15:16.597464 2201 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.24.89" Oct 2 19:15:16.598217 kubelet[2201]: E1002 19:15:16.598038 2201 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.24.89.178a60525ea50c82", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.24.89", UID:"172.31.24.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.24.89 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.24.89"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 16, 549872770, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 16, 593262881, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.24.89"}': 'events "172.31.24.89.178a60525ea50c82" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:15:16.607167 kernel: kauditd_printk_skb: 446 callbacks suppressed Oct 2 19:15:16.607269 kernel: audit: type=1327 audit(1696274116.592:596): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:15:16.592000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:15:16.612268 kubelet[2201]: I1002 19:15:16.612220 2201 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:15:16.612648 kubelet[2201]: I1002 19:15:16.612590 2201 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:15:16.613037 kubelet[2201]: E1002 19:15:16.612892 2201 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.24.89.178a60525ea5307c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.24.89", UID:"172.31.24.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.24.89 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.24.89"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 16, 549881980, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 16, 593267455, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.24.89"}': 'events "172.31.24.89.178a60525ea5307c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:15:16.615639 kubelet[2201]: E1002 19:15:16.613973 2201 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.24.89\" not found" Oct 2 19:15:16.616710 kubelet[2201]: E1002 19:15:16.616586 2201 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.24.89.178a605261792bd9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.24.89", UID:"172.31.24.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.31.24.89"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 16, 597328857, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 16, 597328857, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.24.89"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:16.597000 audit[2221]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=2221 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:16.597000 audit[2221]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd6446bd0 a2=0 a3=1 items=0 ppid=2201 pid=2221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:16.654833 kernel: audit: type=1325 audit(1696274116.597:597): table=filter:4 family=2 entries=2 op=nft_register_chain pid=2221 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:16.654954 kernel: audit: type=1300 audit(1696274116.597:597): arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd6446bd0 a2=0 a3=1 items=0 ppid=2201 pid=2221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:16.655000 kernel: audit: type=1327 audit(1696274116.597:597): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:15:16.597000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:15:16.647000 audit[2227]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=2227 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:16.666930 kernel: audit: type=1325 audit(1696274116.647:598): table=filter:5 family=2 entries=2 op=nft_register_chain pid=2227 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:16.667020 kernel: audit: type=1300 audit(1696274116.647:598): arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffdd064730 a2=0 a3=1 items=0 ppid=2201 pid=2227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:16.647000 audit[2227]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffdd064730 a2=0 a3=1 items=0 ppid=2201 pid=2227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:16.678503 kernel: audit: type=1327 audit(1696274116.647:598): proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:15:16.647000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:15:16.700840 kubelet[2201]: E1002 19:15:16.700716 2201 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.24.89\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Oct 2 19:15:16.736000 audit[2232]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2232 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:16.736000 audit[2232]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffdfcce2b0 a2=0 a3=1 items=0 ppid=2201 pid=2232 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:16.743824 kubelet[2201]: I1002 19:15:16.743791 2201 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 2 19:15:16.752783 kubelet[2201]: I1002 19:15:16.752750 2201 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 2 19:15:16.753057 kubelet[2201]: I1002 19:15:16.753034 2201 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 2 19:15:16.753418 kubelet[2201]: I1002 19:15:16.753397 2201 kubelet.go:2303] "Starting kubelet main sync loop" Oct 2 19:15:16.753659 kubelet[2201]: E1002 19:15:16.753639 2201 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 19:15:16.754494 kernel: audit: type=1325 audit(1696274116.736:599): table=filter:6 family=2 entries=1 op=nft_register_rule pid=2232 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:16.754592 kernel: audit: type=1300 audit(1696274116.736:599): arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffdfcce2b0 a2=0 a3=1 items=0 ppid=2201 pid=2232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:16.736000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:15:16.765077 kernel: audit: type=1327 audit(1696274116.736:599): proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:15:16.747000 audit[2233]: NETFILTER_CFG table=mangle:7 family=10 entries=2 op=nft_register_chain pid=2233 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:16.747000 audit[2233]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe90dce30 a2=0 a3=1 items=0 ppid=2201 pid=2233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:16.747000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:15:16.766549 kubelet[2201]: W1002 19:15:16.766514 2201 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:15:16.766784 kubelet[2201]: E1002 19:15:16.766742 2201 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:15:16.764000 audit[2234]: NETFILTER_CFG table=mangle:8 family=2 entries=1 op=nft_register_chain pid=2234 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:16.764000 audit[2234]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffe29ad60 a2=0 a3=1 items=0 ppid=2201 pid=2234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:16.764000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:15:16.771000 audit[2235]: NETFILTER_CFG table=mangle:9 family=10 entries=1 op=nft_register_chain pid=2235 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:16.771000 audit[2235]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdb88e820 a2=0 a3=1 items=0 ppid=2201 pid=2235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:16.771000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:15:16.772000 audit[2236]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2236 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:16.772000 audit[2236]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffeeec3ac0 a2=0 a3=1 items=0 ppid=2201 pid=2236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:16.772000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:15:16.776000 audit[2237]: NETFILTER_CFG table=nat:11 family=10 entries=2 op=nft_register_chain pid=2237 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:16.776000 audit[2237]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffd9108360 a2=0 a3=1 items=0 ppid=2201 pid=2237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:16.776000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:15:16.777000 audit[2238]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_chain pid=2238 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:16.777000 audit[2238]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffee1be710 a2=0 a3=1 items=0 ppid=2201 pid=2238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:16.777000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:15:16.780000 audit[2239]: NETFILTER_CFG table=filter:13 family=10 entries=2 op=nft_register_chain pid=2239 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:16.780000 audit[2239]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe5f2a6a0 a2=0 a3=1 items=0 ppid=2201 pid=2239 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:16.780000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:15:16.799615 kubelet[2201]: I1002 19:15:16.799573 2201 kubelet_node_status.go:70] "Attempting to register node" node="172.31.24.89" Oct 2 19:15:16.801628 kubelet[2201]: E1002 19:15:16.801571 2201 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.24.89" Oct 2 19:15:16.801871 kubelet[2201]: E1002 19:15:16.801752 2201 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.24.89.178a60525ea4f005", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.24.89", UID:"172.31.24.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.24.89 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.24.89"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 16, 549865477, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 16, 799447395, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.24.89"}': 'events "172.31.24.89.178a60525ea4f005" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
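The audit PROCTITLE records interleaved above store each command line as hex-encoded, NUL-separated argv. A short stdlib-only sketch decodes one of them (the hex string is copied from the record above):

```python
# Illustrative sketch, not part of the log: decode an audit PROCTITLE field into the
# iptables invocation the kubelet made while installing the KUBE-FIREWALL chain.
proctitle_hex = (
    "69707461626C6573002D770035002D5700313030303030002D4900"
    "4F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C"
)
argv = bytes.fromhex(proctitle_hex).split(b"\x00")
print(" ".join(a.decode() for a in argv))
# -> iptables -w 5 -W 100000 -I OUTPUT -t filter -j KUBE-FIREWALL
```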
Oct 2 19:15:16.803478 kubelet[2201]: E1002 19:15:16.803369 2201 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.24.89.178a60525ea50c82", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.24.89", UID:"172.31.24.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.24.89 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.24.89"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 16, 549872770, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 16, 799457659, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.24.89"}': 'events "172.31.24.89.178a60525ea50c82" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:15:16.805012 kubelet[2201]: E1002 19:15:16.804899 2201 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.24.89.178a60525ea5307c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.24.89", UID:"172.31.24.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.24.89 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.24.89"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 16, 549881980, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 16, 799499239, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.24.89"}': 'events "172.31.24.89.178a60525ea5307c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
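The rejected event names, such as 172.31.24.89.178a60525ea5307c, end in a hex suffix that is conventionally the event's first-seen time in Unix nanoseconds, so it can be decoded back into a timestamp. A small sketch using only the Python standard library:

```python
# Illustrative sketch, not part of the log: recover the FirstTimestamp encoded in a
# kubelet-generated event name.
from datetime import datetime, timezone

name = "172.31.24.89.178a60525ea5307c"
nanos = int(name.rsplit(".", 1)[1], 16)
print(datetime.fromtimestamp(nanos / 1e9, tz=timezone.utc))
# ~2023-10-02 19:15:16.549 UTC, matching the FirstTimestamp in the rejected event above.
```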
Oct 2 19:15:17.104028 kubelet[2201]: E1002 19:15:17.103193 2201 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.24.89\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Oct 2 19:15:17.202992 kubelet[2201]: I1002 19:15:17.202947 2201 kubelet_node_status.go:70] "Attempting to register node" node="172.31.24.89" Oct 2 19:15:17.204467 kubelet[2201]: E1002 19:15:17.204433 2201 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.24.89" Oct 2 19:15:17.204965 kubelet[2201]: E1002 19:15:17.204862 2201 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.24.89.178a60525ea4f005", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.24.89", UID:"172.31.24.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.24.89 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.24.89"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 16, 549865477, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 17, 202869102, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.24.89"}': 'events "172.31.24.89.178a60525ea4f005" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
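The "Failed to ensure lease exists, will retry" messages refer to the node heartbeat Lease in the kube-node-lease namespace, which the kubelet cannot read or create while its requests are still anonymous. With authorized credentials the same object can be inspected; a sketch assuming the kubernetes Python client and a kubeconfig (the node name is taken from the log):

```python
# Illustrative sketch, not part of the log: read the heartbeat Lease that the kubelet
# was being denied above.
from kubernetes import client, config

config.load_kube_config()
coord = client.CoordinationV1Api()
lease = coord.read_namespaced_lease(name="172.31.24.89", namespace="kube-node-lease")
print(lease.spec.holder_identity, lease.spec.renew_time, lease.spec.lease_duration_seconds)
```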
Oct 2 19:15:17.207806 kubelet[2201]: E1002 19:15:17.207706 2201 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.24.89.178a60525ea50c82", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.24.89", UID:"172.31.24.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.24.89 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.24.89"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 16, 549872770, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 17, 202876876, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.24.89"}': 'events "172.31.24.89.178a60525ea50c82" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:15:17.210036 kubelet[2201]: E1002 19:15:17.209938 2201 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.24.89.178a60525ea5307c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.24.89", UID:"172.31.24.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.24.89 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.24.89"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 16, 549881980, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 17, 202881728, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.24.89"}': 'events "172.31.24.89.178a60525ea5307c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:17.364343 kubelet[2201]: W1002 19:15:17.364207 2201 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:15:17.364343 kubelet[2201]: E1002 19:15:17.364258 2201 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:15:17.451773 kubelet[2201]: I1002 19:15:17.450240 2201 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 19:15:17.475051 kubelet[2201]: E1002 19:15:17.475007 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:17.858334 kubelet[2201]: E1002 19:15:17.858280 2201 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.24.89" not found Oct 2 19:15:17.910405 kubelet[2201]: E1002 19:15:17.910370 2201 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.24.89\" not found" node="172.31.24.89" Oct 2 19:15:18.006731 kubelet[2201]: I1002 19:15:18.006697 2201 kubelet_node_status.go:70] "Attempting to register node" node="172.31.24.89" Oct 2 19:15:18.012375 kubelet[2201]: I1002 19:15:18.012336 2201 kubelet_node_status.go:73] "Successfully registered node" node="172.31.24.89" Oct 2 19:15:18.138223 kubelet[2201]: I1002 19:15:18.138090 2201 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 19:15:18.139064 env[1736]: time="2023-10-02T19:15:18.139007412Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 2 19:15:18.139636 kubelet[2201]: I1002 19:15:18.139524 2201 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 19:15:18.467000 audit[1996]: USER_END pid=1996 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:15:18.468637 sudo[1996]: pam_unix(sudo:session): session closed for user root Oct 2 19:15:18.468000 audit[1996]: CRED_DISP pid=1996 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:15:18.476071 kubelet[2201]: E1002 19:15:18.476015 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:18.476357 kubelet[2201]: I1002 19:15:18.476061 2201 apiserver.go:52] "Watching apiserver" Oct 2 19:15:18.480477 kubelet[2201]: I1002 19:15:18.480403 2201 topology_manager.go:215] "Topology Admit Handler" podUID="151deab3-835b-44f8-842a-07e41c18fb22" podNamespace="kube-system" podName="cilium-567xl" Oct 2 19:15:18.480882 kubelet[2201]: I1002 19:15:18.480855 2201 topology_manager.go:215] "Topology Admit Handler" podUID="e9a7e493-101e-471b-9787-4d3130e18dab" podNamespace="kube-system" podName="kube-proxy-7gpl2" Oct 2 19:15:18.491230 systemd[1]: Created slice kubepods-burstable-pod151deab3_835b_44f8_842a_07e41c18fb22.slice. Oct 2 19:15:18.493198 sshd[1993]: pam_unix(sshd:session): session closed for user core Oct 2 19:15:18.495529 kubelet[2201]: I1002 19:15:18.495488 2201 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 2 19:15:18.496000 audit[1993]: USER_END pid=1993 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:15:18.496000 audit[1993]: CRED_DISP pid=1993 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:15:18.500530 systemd[1]: sshd@6-172.31.24.89:22-139.178.89.65:43316.service: Deactivated successfully. Oct 2 19:15:18.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.24.89:22-139.178.89.65:43316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:18.501820 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 19:15:18.504018 systemd-logind[1727]: Session 7 logged out. Waiting for processes to exit. Oct 2 19:15:18.505875 systemd-logind[1727]: Removed session 7. 
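Shortly before this, the kubelet received its pod CIDR (newPodCIDR="192.168.1.0/24") and pushed it to the runtime through CRI. A stdlib-only sketch of what that range implies; the sample address 192.168.1.17 is only an illustration, not taken from the log:

```python
# Illustrative sketch, not part of the log: inspect the pod CIDR assigned to this node.
import ipaddress

pod_cidr = ipaddress.ip_network("192.168.1.0/24")
print(pod_cidr.num_addresses - 2)                        # addresses left after network/broadcast
print(ipaddress.ip_address("192.168.1.17") in pod_cidr)  # True: an address inside this range
```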
Oct 2 19:15:18.508729 kubelet[2201]: I1002 19:15:18.505009 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-etc-cni-netd\") pod \"cilium-567xl\" (UID: \"151deab3-835b-44f8-842a-07e41c18fb22\") " pod="kube-system/cilium-567xl" Oct 2 19:15:18.508729 kubelet[2201]: I1002 19:15:18.505087 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-host-proc-sys-kernel\") pod \"cilium-567xl\" (UID: \"151deab3-835b-44f8-842a-07e41c18fb22\") " pod="kube-system/cilium-567xl" Oct 2 19:15:18.508729 kubelet[2201]: I1002 19:15:18.505192 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e9a7e493-101e-471b-9787-4d3130e18dab-kube-proxy\") pod \"kube-proxy-7gpl2\" (UID: \"e9a7e493-101e-471b-9787-4d3130e18dab\") " pod="kube-system/kube-proxy-7gpl2" Oct 2 19:15:18.508729 kubelet[2201]: I1002 19:15:18.505247 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hn8h\" (UniqueName: \"kubernetes.io/projected/e9a7e493-101e-471b-9787-4d3130e18dab-kube-api-access-4hn8h\") pod \"kube-proxy-7gpl2\" (UID: \"e9a7e493-101e-471b-9787-4d3130e18dab\") " pod="kube-system/kube-proxy-7gpl2" Oct 2 19:15:18.508729 kubelet[2201]: I1002 19:15:18.505296 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-hostproc\") pod \"cilium-567xl\" (UID: \"151deab3-835b-44f8-842a-07e41c18fb22\") " pod="kube-system/cilium-567xl" Oct 2 19:15:18.508729 kubelet[2201]: I1002 19:15:18.505344 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-lib-modules\") pod \"cilium-567xl\" (UID: \"151deab3-835b-44f8-842a-07e41c18fb22\") " pod="kube-system/cilium-567xl" Oct 2 19:15:18.509072 kubelet[2201]: I1002 19:15:18.505389 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9a7e493-101e-471b-9787-4d3130e18dab-lib-modules\") pod \"kube-proxy-7gpl2\" (UID: \"e9a7e493-101e-471b-9787-4d3130e18dab\") " pod="kube-system/kube-proxy-7gpl2" Oct 2 19:15:18.509072 kubelet[2201]: I1002 19:15:18.505433 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-bpf-maps\") pod \"cilium-567xl\" (UID: \"151deab3-835b-44f8-842a-07e41c18fb22\") " pod="kube-system/cilium-567xl" Oct 2 19:15:18.509072 kubelet[2201]: I1002 19:15:18.505475 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-cni-path\") pod \"cilium-567xl\" (UID: \"151deab3-835b-44f8-842a-07e41c18fb22\") " pod="kube-system/cilium-567xl" Oct 2 19:15:18.509072 kubelet[2201]: I1002 19:15:18.505519 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-xtables-lock\") pod \"cilium-567xl\" (UID: \"151deab3-835b-44f8-842a-07e41c18fb22\") " pod="kube-system/cilium-567xl" Oct 2 19:15:18.509072 kubelet[2201]: I1002 19:15:18.505562 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/151deab3-835b-44f8-842a-07e41c18fb22-cilium-config-path\") pod \"cilium-567xl\" (UID: \"151deab3-835b-44f8-842a-07e41c18fb22\") " pod="kube-system/cilium-567xl" Oct 2 19:15:18.509072 kubelet[2201]: I1002 19:15:18.505616 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-host-proc-sys-net\") pod \"cilium-567xl\" (UID: \"151deab3-835b-44f8-842a-07e41c18fb22\") " pod="kube-system/cilium-567xl" Oct 2 19:15:18.509442 kubelet[2201]: I1002 19:15:18.505657 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/151deab3-835b-44f8-842a-07e41c18fb22-hubble-tls\") pod \"cilium-567xl\" (UID: \"151deab3-835b-44f8-842a-07e41c18fb22\") " pod="kube-system/cilium-567xl" Oct 2 19:15:18.509442 kubelet[2201]: I1002 19:15:18.505707 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9a7e493-101e-471b-9787-4d3130e18dab-xtables-lock\") pod \"kube-proxy-7gpl2\" (UID: \"e9a7e493-101e-471b-9787-4d3130e18dab\") " pod="kube-system/kube-proxy-7gpl2" Oct 2 19:15:18.509442 kubelet[2201]: I1002 19:15:18.505749 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-cilium-cgroup\") pod \"cilium-567xl\" (UID: \"151deab3-835b-44f8-842a-07e41c18fb22\") " pod="kube-system/cilium-567xl" Oct 2 19:15:18.509442 kubelet[2201]: I1002 19:15:18.505791 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d4sp\" (UniqueName: \"kubernetes.io/projected/151deab3-835b-44f8-842a-07e41c18fb22-kube-api-access-6d4sp\") pod \"cilium-567xl\" (UID: \"151deab3-835b-44f8-842a-07e41c18fb22\") " pod="kube-system/cilium-567xl" Oct 2 19:15:18.509442 kubelet[2201]: I1002 19:15:18.505844 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-cilium-run\") pod \"cilium-567xl\" (UID: \"151deab3-835b-44f8-842a-07e41c18fb22\") " pod="kube-system/cilium-567xl" Oct 2 19:15:18.509442 kubelet[2201]: I1002 19:15:18.505890 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/151deab3-835b-44f8-842a-07e41c18fb22-clustermesh-secrets\") pod \"cilium-567xl\" (UID: \"151deab3-835b-44f8-842a-07e41c18fb22\") " pod="kube-system/cilium-567xl" Oct 2 19:15:18.529717 systemd[1]: Created slice kubepods-besteffort-pode9a7e493_101e_471b_9787_4d3130e18dab.slice. 
Oct 2 19:15:18.828111 env[1736]: time="2023-10-02T19:15:18.827937233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-567xl,Uid:151deab3-835b-44f8-842a-07e41c18fb22,Namespace:kube-system,Attempt:0,}" Oct 2 19:15:18.841420 env[1736]: time="2023-10-02T19:15:18.841340711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7gpl2,Uid:e9a7e493-101e-471b-9787-4d3130e18dab,Namespace:kube-system,Attempt:0,}" Oct 2 19:15:19.406030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount974781404.mount: Deactivated successfully. Oct 2 19:15:19.418323 env[1736]: time="2023-10-02T19:15:19.418264215Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:19.420609 env[1736]: time="2023-10-02T19:15:19.420563372Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:19.424915 env[1736]: time="2023-10-02T19:15:19.424866307Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:19.427172 env[1736]: time="2023-10-02T19:15:19.427085484Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:19.431439 env[1736]: time="2023-10-02T19:15:19.431374541Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:19.432853 env[1736]: time="2023-10-02T19:15:19.432812100Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:19.437054 env[1736]: time="2023-10-02T19:15:19.436994237Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:19.439993 env[1736]: time="2023-10-02T19:15:19.439875994Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:19.476908 kubelet[2201]: E1002 19:15:19.476848 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:19.492387 env[1736]: time="2023-10-02T19:15:19.492226341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:15:19.492387 env[1736]: time="2023-10-02T19:15:19.492322393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:15:19.492387 env[1736]: time="2023-10-02T19:15:19.492350243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:15:19.492842 env[1736]: time="2023-10-02T19:15:19.492728217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:15:19.492925 env[1736]: time="2023-10-02T19:15:19.492876303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:15:19.493100 env[1736]: time="2023-10-02T19:15:19.492969095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:15:19.493470 env[1736]: time="2023-10-02T19:15:19.493335516Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1f4c4305d46b03db41fca1bf07f295066b212a4940b18f0b8f61f6916ed38269 pid=2263 runtime=io.containerd.runc.v2 Oct 2 19:15:19.493470 env[1736]: time="2023-10-02T19:15:19.493330495Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6ae1b6dd55e6b1b66c7c00f271c591d28cf30e33bcbb97c759d60ebe673ab459 pid=2259 runtime=io.containerd.runc.v2 Oct 2 19:15:19.528714 systemd[1]: Started cri-containerd-1f4c4305d46b03db41fca1bf07f295066b212a4940b18f0b8f61f6916ed38269.scope. Oct 2 19:15:19.539557 systemd[1]: Started cri-containerd-6ae1b6dd55e6b1b66c7c00f271c591d28cf30e33bcbb97c759d60ebe673ab459.scope. Oct 2 19:15:19.591000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.591000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.591000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.591000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.591000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.591000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.591000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.591000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.591000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.591000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:15:19.591000 audit: BPF prog-id=70 op=LOAD Oct 2 19:15:19.592000 audit[2277]: AVC avc: denied { bpf } for pid=2277 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.592000 audit[2277]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=400011db38 a2=10 a3=0 items=0 ppid=2263 pid=2277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:19.592000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166346334333035643436623033646234316663613162663037663239 Oct 2 19:15:19.593000 audit[2277]: AVC avc: denied { perfmon } for pid=2277 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.593000 audit[2277]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=400011d5a0 a2=3c a3=0 items=0 ppid=2263 pid=2277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:19.593000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166346334333035643436623033646234316663613162663037663239 Oct 2 19:15:19.593000 audit[2277]: AVC avc: denied { bpf } for pid=2277 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.593000 audit[2277]: AVC avc: denied { bpf } for pid=2277 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.593000 audit[2277]: AVC avc: denied { bpf } for pid=2277 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.593000 audit[2277]: AVC avc: denied { perfmon } for pid=2277 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.593000 audit[2277]: AVC avc: denied { perfmon } for pid=2277 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.593000 audit[2277]: AVC avc: denied { perfmon } for pid=2277 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.593000 audit[2277]: AVC avc: denied { perfmon } for pid=2277 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.593000 audit[2277]: AVC avc: denied { perfmon } for pid=2277 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.593000 audit[2277]: AVC avc: denied { bpf } for pid=2277 comm="runc" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.593000 audit[2277]: AVC avc: denied { bpf } for pid=2277 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.593000 audit: BPF prog-id=71 op=LOAD Oct 2 19:15:19.593000 audit[2277]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400011d8e0 a2=78 a3=0 items=0 ppid=2263 pid=2277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:19.593000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166346334333035643436623033646234316663613162663037663239 Oct 2 19:15:19.595000 audit[2277]: AVC avc: denied { bpf } for pid=2277 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.595000 audit[2277]: AVC avc: denied { bpf } for pid=2277 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.595000 audit[2277]: AVC avc: denied { perfmon } for pid=2277 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.595000 audit[2277]: AVC avc: denied { perfmon } for pid=2277 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.595000 audit[2277]: AVC avc: denied { perfmon } for pid=2277 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.595000 audit[2277]: AVC avc: denied { perfmon } for pid=2277 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.595000 audit[2277]: AVC avc: denied { perfmon } for pid=2277 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.595000 audit[2277]: AVC avc: denied { bpf } for pid=2277 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.595000 audit[2277]: AVC avc: denied { bpf } for pid=2277 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.595000 audit: BPF prog-id=72 op=LOAD Oct 2 19:15:19.595000 audit[2277]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400011d670 a2=78 a3=0 items=0 ppid=2263 pid=2277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:19.595000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166346334333035643436623033646234316663613162663037663239 Oct 2 19:15:19.596000 audit: BPF prog-id=72 op=UNLOAD Oct 2 19:15:19.597000 audit: BPF prog-id=71 op=UNLOAD Oct 2 19:15:19.597000 audit[2277]: AVC avc: denied { bpf } for pid=2277 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.597000 audit[2277]: AVC avc: denied { bpf } for pid=2277 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.597000 audit[2277]: AVC avc: denied { bpf } for pid=2277 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.597000 audit[2277]: AVC avc: denied { perfmon } for pid=2277 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.597000 audit[2277]: AVC avc: denied { perfmon } for pid=2277 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.597000 audit[2277]: AVC avc: denied { perfmon } for pid=2277 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.597000 audit[2277]: AVC avc: denied { perfmon } for pid=2277 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.597000 audit[2277]: AVC avc: denied { perfmon } for pid=2277 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.597000 audit[2277]: AVC avc: denied { bpf } for pid=2277 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.598000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.598000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.598000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.598000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.598000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.598000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.598000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.598000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.598000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.599000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.599000 audit: BPF prog-id=73 op=LOAD Oct 2 19:15:19.600000 audit[2280]: AVC avc: denied { bpf } for pid=2280 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.600000 audit[2280]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000145b38 a2=10 a3=0 items=0 ppid=2259 pid=2280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:19.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661653162366464353565366231623636633763303066323731633539 Oct 2 19:15:19.600000 audit[2280]: AVC avc: denied { perfmon } for pid=2280 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.600000 audit[2280]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001455a0 a2=3c a3=0 items=0 ppid=2259 pid=2280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:19.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661653162366464353565366231623636633763303066323731633539 Oct 2 19:15:19.600000 audit[2280]: AVC avc: denied { bpf } for pid=2280 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.600000 audit[2280]: AVC avc: denied { bpf } for pid=2280 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.600000 audit[2280]: AVC avc: denied { bpf } for pid=2280 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.600000 audit[2280]: AVC avc: denied { perfmon } for pid=2280 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.600000 audit[2280]: AVC avc: denied { perfmon } for pid=2280 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.600000 audit[2280]: AVC avc: denied { perfmon } for pid=2280 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.600000 audit[2280]: AVC avc: denied { perfmon } for pid=2280 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.600000 audit[2280]: AVC avc: denied { perfmon } for pid=2280 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.600000 audit[2280]: AVC avc: denied { bpf } for pid=2280 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.600000 audit[2280]: AVC avc: denied { bpf } for pid=2280 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.600000 audit: BPF prog-id=74 op=LOAD Oct 2 19:15:19.600000 audit[2280]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001458e0 a2=78 a3=0 items=0 ppid=2259 pid=2280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:19.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661653162366464353565366231623636633763303066323731633539 Oct 2 19:15:19.602000 audit[2280]: AVC avc: denied { bpf } for pid=2280 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.602000 audit[2280]: AVC avc: denied { bpf } for pid=2280 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.602000 audit[2280]: AVC avc: denied { perfmon } for pid=2280 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.602000 audit[2280]: AVC avc: denied { perfmon } for pid=2280 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.602000 audit[2280]: AVC avc: denied { perfmon } for pid=2280 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.602000 audit[2280]: AVC avc: denied { perfmon } for pid=2280 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.602000 audit[2280]: AVC avc: denied { perfmon } for pid=2280 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.602000 audit[2280]: AVC avc: denied { bpf } for pid=2280 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.602000 audit[2280]: AVC avc: 
denied { bpf } for pid=2280 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.602000 audit: BPF prog-id=75 op=LOAD Oct 2 19:15:19.597000 audit[2277]: AVC avc: denied { bpf } for pid=2277 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.602000 audit[2280]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000145670 a2=78 a3=0 items=0 ppid=2259 pid=2280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:19.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661653162366464353565366231623636633763303066323731633539 Oct 2 19:15:19.604000 audit: BPF prog-id=75 op=UNLOAD Oct 2 19:15:19.604000 audit: BPF prog-id=74 op=UNLOAD Oct 2 19:15:19.597000 audit: BPF prog-id=76 op=LOAD Oct 2 19:15:19.597000 audit[2277]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400011db40 a2=78 a3=0 items=0 ppid=2263 pid=2277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:19.597000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166346334333035643436623033646234316663613162663037663239 Oct 2 19:15:19.605000 audit[2280]: AVC avc: denied { bpf } for pid=2280 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.605000 audit[2280]: AVC avc: denied { bpf } for pid=2280 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.605000 audit[2280]: AVC avc: denied { bpf } for pid=2280 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.605000 audit[2280]: AVC avc: denied { perfmon } for pid=2280 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.605000 audit[2280]: AVC avc: denied { perfmon } for pid=2280 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.605000 audit[2280]: AVC avc: denied { perfmon } for pid=2280 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.605000 audit[2280]: AVC avc: denied { perfmon } for pid=2280 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.605000 audit[2280]: AVC avc: denied { perfmon } for pid=2280 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:15:19.605000 audit[2280]: AVC avc: denied { bpf } for pid=2280 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.605000 audit[2280]: AVC avc: denied { bpf } for pid=2280 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:19.605000 audit: BPF prog-id=77 op=LOAD Oct 2 19:15:19.605000 audit[2280]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000145b40 a2=78 a3=0 items=0 ppid=2259 pid=2280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:19.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661653162366464353565366231623636633763303066323731633539 Oct 2 19:15:19.663893 env[1736]: time="2023-10-02T19:15:19.661527920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-567xl,Uid:151deab3-835b-44f8-842a-07e41c18fb22,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f4c4305d46b03db41fca1bf07f295066b212a4940b18f0b8f61f6916ed38269\"" Oct 2 19:15:19.670082 env[1736]: time="2023-10-02T19:15:19.670024352Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 2 19:15:19.672874 env[1736]: time="2023-10-02T19:15:19.672817668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7gpl2,Uid:e9a7e493-101e-471b-9787-4d3130e18dab,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ae1b6dd55e6b1b66c7c00f271c591d28cf30e33bcbb97c759d60ebe673ab459\"" Oct 2 19:15:20.477068 kubelet[2201]: E1002 19:15:20.477025 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:21.478045 kubelet[2201]: E1002 19:15:21.477978 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:22.478712 kubelet[2201]: E1002 19:15:22.478646 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:23.479671 kubelet[2201]: E1002 19:15:23.479629 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:24.480867 kubelet[2201]: E1002 19:15:24.480726 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:25.481856 kubelet[2201]: E1002 19:15:25.481725 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:26.482231 kubelet[2201]: E1002 19:15:26.482158 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:26.888324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2387732844.mount: Deactivated successfully. 
Oct 2 19:15:27.483034 kubelet[2201]: E1002 19:15:27.482961 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:28.483723 kubelet[2201]: E1002 19:15:28.483610 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:29.484046 kubelet[2201]: E1002 19:15:29.484005 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:30.486732 kubelet[2201]: E1002 19:15:30.486673 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:30.784834 env[1736]: time="2023-10-02T19:15:30.784426222Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:30.789157 env[1736]: time="2023-10-02T19:15:30.789062470Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:30.793529 env[1736]: time="2023-10-02T19:15:30.793466602Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:30.794786 env[1736]: time="2023-10-02T19:15:30.794735032Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Oct 2 19:15:30.797182 env[1736]: time="2023-10-02T19:15:30.797084964Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.2\"" Oct 2 19:15:30.799094 env[1736]: time="2023-10-02T19:15:30.799015974Z" level=info msg="CreateContainer within sandbox \"1f4c4305d46b03db41fca1bf07f295066b212a4940b18f0b8f61f6916ed38269\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:15:30.826073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1975726765.mount: Deactivated successfully. Oct 2 19:15:30.835243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount766796961.mount: Deactivated successfully. Oct 2 19:15:30.843641 env[1736]: time="2023-10-02T19:15:30.843574911Z" level=info msg="CreateContainer within sandbox \"1f4c4305d46b03db41fca1bf07f295066b212a4940b18f0b8f61f6916ed38269\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"33a9d15244b1485b5f52db3b376cbf0400df13e6193431d5e962700ee02a6e81\"" Oct 2 19:15:30.845044 env[1736]: time="2023-10-02T19:15:30.844972708Z" level=info msg="StartContainer for \"33a9d15244b1485b5f52db3b376cbf0400df13e6193431d5e962700ee02a6e81\"" Oct 2 19:15:30.898018 systemd[1]: Started cri-containerd-33a9d15244b1485b5f52db3b376cbf0400df13e6193431d5e962700ee02a6e81.scope. Oct 2 19:15:30.933749 systemd[1]: cri-containerd-33a9d15244b1485b5f52db3b376cbf0400df13e6193431d5e962700ee02a6e81.scope: Deactivated successfully. 
Oct 2 19:15:31.487078 kubelet[2201]: E1002 19:15:31.487018 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:31.816644 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33a9d15244b1485b5f52db3b376cbf0400df13e6193431d5e962700ee02a6e81-rootfs.mount: Deactivated successfully. Oct 2 19:15:32.261362 env[1736]: time="2023-10-02T19:15:32.261288755Z" level=info msg="shim disconnected" id=33a9d15244b1485b5f52db3b376cbf0400df13e6193431d5e962700ee02a6e81 Oct 2 19:15:32.262041 env[1736]: time="2023-10-02T19:15:32.261983127Z" level=warning msg="cleaning up after shim disconnected" id=33a9d15244b1485b5f52db3b376cbf0400df13e6193431d5e962700ee02a6e81 namespace=k8s.io Oct 2 19:15:32.262216 env[1736]: time="2023-10-02T19:15:32.262184523Z" level=info msg="cleaning up dead shim" Oct 2 19:15:32.289832 env[1736]: time="2023-10-02T19:15:32.289762315Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:15:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2355 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:15:32Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/33a9d15244b1485b5f52db3b376cbf0400df13e6193431d5e962700ee02a6e81/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:15:32.290778 env[1736]: time="2023-10-02T19:15:32.290622596Z" level=error msg="copy shim log" error="read /proc/self/fd/43: file already closed" Oct 2 19:15:32.293303 env[1736]: time="2023-10-02T19:15:32.293237380Z" level=error msg="Failed to pipe stdout of container \"33a9d15244b1485b5f52db3b376cbf0400df13e6193431d5e962700ee02a6e81\"" error="reading from a closed fifo" Oct 2 19:15:32.293609 env[1736]: time="2023-10-02T19:15:32.293506009Z" level=error msg="Failed to pipe stderr of container \"33a9d15244b1485b5f52db3b376cbf0400df13e6193431d5e962700ee02a6e81\"" error="reading from a closed fifo" Oct 2 19:15:32.295719 env[1736]: time="2023-10-02T19:15:32.295612268Z" level=error msg="StartContainer for \"33a9d15244b1485b5f52db3b376cbf0400df13e6193431d5e962700ee02a6e81\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:15:32.296209 kubelet[2201]: E1002 19:15:32.296169 2201 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="33a9d15244b1485b5f52db3b376cbf0400df13e6193431d5e962700ee02a6e81" Oct 2 19:15:32.296854 kubelet[2201]: E1002 19:15:32.296595 2201 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:15:32.296854 kubelet[2201]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:15:32.296854 kubelet[2201]: rm /hostbin/cilium-mount Oct 2 19:15:32.297202 kubelet[2201]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6d4sp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-567xl_kube-system(151deab3-835b-44f8-842a-07e41c18fb22): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:15:32.297202 kubelet[2201]: E1002 19:15:32.296679 2201 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-567xl" podUID="151deab3-835b-44f8-842a-07e41c18fb22" Oct 2 19:15:32.487847 kubelet[2201]: E1002 19:15:32.487771 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:32.820894 env[1736]: time="2023-10-02T19:15:32.820781546Z" level=info msg="CreateContainer within sandbox \"1f4c4305d46b03db41fca1bf07f295066b212a4940b18f0b8f61f6916ed38269\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:15:32.849880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2169491407.mount: Deactivated successfully. Oct 2 19:15:32.862885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3320085268.mount: Deactivated successfully. Oct 2 19:15:32.869773 env[1736]: time="2023-10-02T19:15:32.869698708Z" level=info msg="CreateContainer within sandbox \"1f4c4305d46b03db41fca1bf07f295066b212a4940b18f0b8f61f6916ed38269\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"066b42fd571b96e4c6b4be668d262594a7dfe5908f0580fe06939bb6ff7d53ea\"" Oct 2 19:15:32.871003 env[1736]: time="2023-10-02T19:15:32.870936144Z" level=info msg="StartContainer for \"066b42fd571b96e4c6b4be668d262594a7dfe5908f0580fe06939bb6ff7d53ea\"" Oct 2 19:15:32.941062 systemd[1]: Started cri-containerd-066b42fd571b96e4c6b4be668d262594a7dfe5908f0580fe06939bb6ff7d53ea.scope. 
Oct 2 19:15:32.984940 systemd[1]: cri-containerd-066b42fd571b96e4c6b4be668d262594a7dfe5908f0580fe06939bb6ff7d53ea.scope: Deactivated successfully. Oct 2 19:15:33.049397 env[1736]: time="2023-10-02T19:15:33.049312275Z" level=info msg="shim disconnected" id=066b42fd571b96e4c6b4be668d262594a7dfe5908f0580fe06939bb6ff7d53ea Oct 2 19:15:33.049397 env[1736]: time="2023-10-02T19:15:33.049390391Z" level=warning msg="cleaning up after shim disconnected" id=066b42fd571b96e4c6b4be668d262594a7dfe5908f0580fe06939bb6ff7d53ea namespace=k8s.io Oct 2 19:15:33.049698 env[1736]: time="2023-10-02T19:15:33.049412851Z" level=info msg="cleaning up dead shim" Oct 2 19:15:33.086419 env[1736]: time="2023-10-02T19:15:33.086246166Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:15:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2392 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:15:33Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/066b42fd571b96e4c6b4be668d262594a7dfe5908f0580fe06939bb6ff7d53ea/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:15:33.087306 env[1736]: time="2023-10-02T19:15:33.087211187Z" level=error msg="copy shim log" error="read /proc/self/fd/47: file already closed" Oct 2 19:15:33.088319 env[1736]: time="2023-10-02T19:15:33.088252273Z" level=error msg="Failed to pipe stderr of container \"066b42fd571b96e4c6b4be668d262594a7dfe5908f0580fe06939bb6ff7d53ea\"" error="reading from a closed fifo" Oct 2 19:15:33.088548 env[1736]: time="2023-10-02T19:15:33.088493087Z" level=error msg="Failed to pipe stdout of container \"066b42fd571b96e4c6b4be668d262594a7dfe5908f0580fe06939bb6ff7d53ea\"" error="reading from a closed fifo" Oct 2 19:15:33.096670 env[1736]: time="2023-10-02T19:15:33.096595736Z" level=error msg="StartContainer for \"066b42fd571b96e4c6b4be668d262594a7dfe5908f0580fe06939bb6ff7d53ea\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:15:33.097263 kubelet[2201]: E1002 19:15:33.097110 2201 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="066b42fd571b96e4c6b4be668d262594a7dfe5908f0580fe06939bb6ff7d53ea" Oct 2 19:15:33.097926 kubelet[2201]: E1002 19:15:33.097891 2201 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:15:33.097926 kubelet[2201]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:15:33.097926 kubelet[2201]: rm /hostbin/cilium-mount Oct 2 19:15:33.097926 kubelet[2201]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6d4sp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-567xl_kube-system(151deab3-835b-44f8-842a-07e41c18fb22): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:15:33.098522 kubelet[2201]: E1002 19:15:33.097963 2201 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-567xl" podUID="151deab3-835b-44f8-842a-07e41c18fb22" Oct 2 19:15:33.488343 kubelet[2201]: E1002 19:15:33.488264 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:33.809189 kubelet[2201]: I1002 19:15:33.808290 2201 scope.go:117] "RemoveContainer" containerID="33a9d15244b1485b5f52db3b376cbf0400df13e6193431d5e962700ee02a6e81" Oct 2 19:15:33.809189 kubelet[2201]: I1002 19:15:33.809017 2201 scope.go:117] "RemoveContainer" containerID="33a9d15244b1485b5f52db3b376cbf0400df13e6193431d5e962700ee02a6e81" Oct 2 19:15:33.814839 env[1736]: time="2023-10-02T19:15:33.814781635Z" level=info msg="RemoveContainer for \"33a9d15244b1485b5f52db3b376cbf0400df13e6193431d5e962700ee02a6e81\"" Oct 2 19:15:33.819823 env[1736]: time="2023-10-02T19:15:33.819766547Z" level=info msg="RemoveContainer for \"33a9d15244b1485b5f52db3b376cbf0400df13e6193431d5e962700ee02a6e81\" returns successfully" Oct 2 19:15:33.820684 env[1736]: time="2023-10-02T19:15:33.820641563Z" level=info msg="RemoveContainer for \"33a9d15244b1485b5f52db3b376cbf0400df13e6193431d5e962700ee02a6e81\"" Oct 2 19:15:33.820876 env[1736]: time="2023-10-02T19:15:33.820841837Z" level=info msg="RemoveContainer for \"33a9d15244b1485b5f52db3b376cbf0400df13e6193431d5e962700ee02a6e81\" returns successfully" Oct 2 19:15:33.822253 kubelet[2201]: E1002 19:15:33.821762 
2201 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-567xl_kube-system(151deab3-835b-44f8-842a-07e41c18fb22)\"" pod="kube-system/cilium-567xl" podUID="151deab3-835b-44f8-842a-07e41c18fb22" Oct 2 19:15:33.844824 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-066b42fd571b96e4c6b4be668d262594a7dfe5908f0580fe06939bb6ff7d53ea-rootfs.mount: Deactivated successfully. Oct 2 19:15:33.845003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount611559021.mount: Deactivated successfully. Oct 2 19:15:34.101658 env[1736]: time="2023-10-02T19:15:34.101496436Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:34.105104 env[1736]: time="2023-10-02T19:15:34.105039588Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:34.107753 env[1736]: time="2023-10-02T19:15:34.107702736Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:34.110063 env[1736]: time="2023-10-02T19:15:34.110017142Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:34.111019 env[1736]: time="2023-10-02T19:15:34.110959884Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.2\" returns image reference \"sha256:7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa\"" Oct 2 19:15:34.113876 env[1736]: time="2023-10-02T19:15:34.113815597Z" level=info msg="CreateContainer within sandbox \"6ae1b6dd55e6b1b66c7c00f271c591d28cf30e33bcbb97c759d60ebe673ab459\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:15:34.133722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1796059578.mount: Deactivated successfully. Oct 2 19:15:34.152983 env[1736]: time="2023-10-02T19:15:34.152921616Z" level=info msg="CreateContainer within sandbox \"6ae1b6dd55e6b1b66c7c00f271c591d28cf30e33bcbb97c759d60ebe673ab459\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"23f996f2b8ffae1a75126ae8ae625d08619dbb3af48eb7a214edbc59ef156d4b\"" Oct 2 19:15:34.154512 env[1736]: time="2023-10-02T19:15:34.154444609Z" level=info msg="StartContainer for \"23f996f2b8ffae1a75126ae8ae625d08619dbb3af48eb7a214edbc59ef156d4b\"" Oct 2 19:15:34.197633 systemd[1]: Started cri-containerd-23f996f2b8ffae1a75126ae8ae625d08619dbb3af48eb7a214edbc59ef156d4b.scope. 
Oct 2 19:15:34.255345 kernel: kauditd_printk_skb: 140 callbacks suppressed Oct 2 19:15:34.255508 kernel: audit: type=1400 audit(1696274134.244:648): avc: denied { perfmon } for pid=2413 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.244000 audit[2413]: AVC avc: denied { perfmon } for pid=2413 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.244000 audit[2413]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=2259 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.244000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233663939366632623866666165316137353132366165386165363235 Oct 2 19:15:34.280176 kernel: audit: type=1300 audit(1696274134.244:648): arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=2259 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.280340 kernel: audit: type=1327 audit(1696274134.244:648): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233663939366632623866666165316137353132366165386165363235 Oct 2 19:15:34.244000 audit[2413]: AVC avc: denied { bpf } for pid=2413 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.288493 kernel: audit: type=1400 audit(1696274134.244:649): avc: denied { bpf } for pid=2413 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.244000 audit[2413]: AVC avc: denied { bpf } for pid=2413 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.296625 kernel: audit: type=1400 audit(1696274134.244:649): avc: denied { bpf } for pid=2413 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.244000 audit[2413]: AVC avc: denied { bpf } for pid=2413 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.304694 kernel: audit: type=1400 audit(1696274134.244:649): avc: denied { bpf } for pid=2413 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.244000 audit[2413]: AVC avc: denied { perfmon } for pid=2413 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.312848 kernel: audit: type=1400 audit(1696274134.244:649): avc: denied { 
perfmon } for pid=2413 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.244000 audit[2413]: AVC avc: denied { perfmon } for pid=2413 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.320999 kernel: audit: type=1400 audit(1696274134.244:649): avc: denied { perfmon } for pid=2413 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.327238 kernel: audit: type=1400 audit(1696274134.244:649): avc: denied { perfmon } for pid=2413 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.244000 audit[2413]: AVC avc: denied { perfmon } for pid=2413 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.244000 audit[2413]: AVC avc: denied { perfmon } for pid=2413 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.337798 kernel: audit: type=1400 audit(1696274134.244:649): avc: denied { perfmon } for pid=2413 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.244000 audit[2413]: AVC avc: denied { perfmon } for pid=2413 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.244000 audit[2413]: AVC avc: denied { bpf } for pid=2413 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.244000 audit[2413]: AVC avc: denied { bpf } for pid=2413 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.244000 audit: BPF prog-id=78 op=LOAD Oct 2 19:15:34.244000 audit[2413]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=2259 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.244000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233663939366632623866666165316137353132366165386165363235 Oct 2 19:15:34.246000 audit[2413]: AVC avc: denied { bpf } for pid=2413 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.246000 audit[2413]: AVC avc: denied { bpf } for pid=2413 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.246000 audit[2413]: AVC avc: denied { perfmon } for pid=2413 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.246000 audit[2413]: AVC avc: denied { perfmon } 
for pid=2413 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.246000 audit[2413]: AVC avc: denied { perfmon } for pid=2413 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.246000 audit[2413]: AVC avc: denied { perfmon } for pid=2413 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.246000 audit[2413]: AVC avc: denied { perfmon } for pid=2413 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.246000 audit[2413]: AVC avc: denied { bpf } for pid=2413 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.246000 audit[2413]: AVC avc: denied { bpf } for pid=2413 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.246000 audit: BPF prog-id=79 op=LOAD Oct 2 19:15:34.246000 audit[2413]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=2259 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.246000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233663939366632623866666165316137353132366165386165363235 Oct 2 19:15:34.254000 audit: BPF prog-id=79 op=UNLOAD Oct 2 19:15:34.254000 audit: BPF prog-id=78 op=UNLOAD Oct 2 19:15:34.254000 audit[2413]: AVC avc: denied { bpf } for pid=2413 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.254000 audit[2413]: AVC avc: denied { bpf } for pid=2413 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.254000 audit[2413]: AVC avc: denied { bpf } for pid=2413 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.254000 audit[2413]: AVC avc: denied { perfmon } for pid=2413 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.254000 audit[2413]: AVC avc: denied { perfmon } for pid=2413 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.254000 audit[2413]: AVC avc: denied { perfmon } for pid=2413 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.254000 audit[2413]: AVC avc: denied { perfmon } for pid=2413 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.254000 audit[2413]: AVC avc: denied { perfmon } for 
pid=2413 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.254000 audit[2413]: AVC avc: denied { bpf } for pid=2413 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.254000 audit[2413]: AVC avc: denied { bpf } for pid=2413 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.254000 audit: BPF prog-id=80 op=LOAD Oct 2 19:15:34.254000 audit[2413]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=2259 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233663939366632623866666165316137353132366165386165363235 Oct 2 19:15:34.344395 env[1736]: time="2023-10-02T19:15:34.344336947Z" level=info msg="StartContainer for \"23f996f2b8ffae1a75126ae8ae625d08619dbb3af48eb7a214edbc59ef156d4b\" returns successfully" Oct 2 19:15:34.488734 kubelet[2201]: E1002 19:15:34.488622 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:34.519000 audit[2464]: NETFILTER_CFG table=mangle:14 family=2 entries=1 op=nft_register_chain pid=2464 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.519000 audit[2464]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffd8238d0 a2=0 a3=ffffa3afc6c0 items=0 ppid=2422 pid=2464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.519000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:15:34.522000 audit[2465]: NETFILTER_CFG table=mangle:15 family=10 entries=1 op=nft_register_chain pid=2465 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.522000 audit[2465]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe071d3f0 a2=0 a3=ffffb8b876c0 items=0 ppid=2422 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.522000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:15:34.523000 audit[2466]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_chain pid=2466 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.523000 audit[2466]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffced0df80 a2=0 a3=ffff994cd6c0 items=0 ppid=2422 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.523000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:15:34.525000 audit[2467]: NETFILTER_CFG table=nat:17 family=10 entries=1 op=nft_register_chain pid=2467 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.525000 audit[2467]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffee987120 a2=0 a3=ffffaaf446c0 items=0 ppid=2422 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.525000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:15:34.527000 audit[2468]: NETFILTER_CFG table=filter:18 family=2 entries=1 op=nft_register_chain pid=2468 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.527000 audit[2468]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdb5e5690 a2=0 a3=ffffb2f4d6c0 items=0 ppid=2422 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.527000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:15:34.529000 audit[2469]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2469 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.529000 audit[2469]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff0763240 a2=0 a3=ffffac5226c0 items=0 ppid=2422 pid=2469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.529000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:15:34.620000 audit[2470]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_chain pid=2470 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.620000 audit[2470]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd52110e0 a2=0 a3=ffff930b26c0 items=0 ppid=2422 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.620000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:15:34.628000 audit[2472]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=2472 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.628000 audit[2472]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff0176890 a2=0 a3=ffff8970d6c0 items=0 ppid=2422 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.628000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:15:34.641000 audit[2475]: NETFILTER_CFG table=filter:22 family=2 entries=2 op=nft_register_chain pid=2475 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.641000 audit[2475]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffcc7cb570 a2=0 a3=ffffb24786c0 items=0 ppid=2422 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.641000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:15:34.645000 audit[2476]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_chain pid=2476 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.645000 audit[2476]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc4859290 a2=0 a3=ffff882dc6c0 items=0 ppid=2422 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.645000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:15:34.654000 audit[2478]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=2478 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.654000 audit[2478]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff71d0e20 a2=0 a3=ffff94a4e6c0 items=0 ppid=2422 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.654000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:15:34.659000 audit[2479]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_chain pid=2479 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.659000 audit[2479]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc16a9640 a2=0 a3=ffff95c016c0 items=0 ppid=2422 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.659000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:15:34.667000 audit[2481]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=2481 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.667000 audit[2481]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffffdea9480 a2=0 a3=ffffbd64f6c0 items=0 ppid=2422 
pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.667000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:15:34.679000 audit[2484]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_rule pid=2484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.679000 audit[2484]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff33f6650 a2=0 a3=ffff9b1096c0 items=0 ppid=2422 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.679000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:15:34.684000 audit[2485]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2485 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.684000 audit[2485]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcbaa37e0 a2=0 a3=ffffb38fa6c0 items=0 ppid=2422 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.684000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:15:34.692000 audit[2487]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2487 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.692000 audit[2487]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff8bc0840 a2=0 a3=ffffb14e46c0 items=0 ppid=2422 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.692000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:15:34.695000 audit[2488]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_chain pid=2488 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.695000 audit[2488]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe3c28470 a2=0 a3=ffff83fad6c0 items=0 ppid=2422 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.695000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:15:34.704000 audit[2490]: NETFILTER_CFG table=filter:31 family=2 entries=1 
op=nft_register_rule pid=2490 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.704000 audit[2490]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc3a24850 a2=0 a3=ffff9b2be6c0 items=0 ppid=2422 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.704000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:15:34.716000 audit[2493]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=2493 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.716000 audit[2493]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc5050ac0 a2=0 a3=ffff90d9b6c0 items=0 ppid=2422 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.716000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:15:34.728000 audit[2496]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2496 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.728000 audit[2496]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff3a79590 a2=0 a3=ffffa3ca46c0 items=0 ppid=2422 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.728000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:15:34.731000 audit[2497]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=2497 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.731000 audit[2497]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd7ab4560 a2=0 a3=ffff976206c0 items=0 ppid=2422 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.731000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:15:34.740000 audit[2499]: NETFILTER_CFG table=nat:35 family=2 entries=2 op=nft_register_chain pid=2499 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.740000 audit[2499]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffc65ff5a0 a2=0 a3=ffff9f3a46c0 items=0 ppid=2422 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Oct 2 19:15:34.740000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:15:34.783000 audit[2505]: NETFILTER_CFG table=nat:36 family=2 entries=2 op=nft_register_chain pid=2505 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.783000 audit[2505]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffd4a11240 a2=0 a3=ffff8fd686c0 items=0 ppid=2422 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.783000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:15:34.787000 audit[2506]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=2506 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.787000 audit[2506]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff7098270 a2=0 a3=ffff897b46c0 items=0 ppid=2422 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.787000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:15:34.795000 audit[2508]: NETFILTER_CFG table=nat:38 family=2 entries=2 op=nft_register_chain pid=2508 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.795000 audit[2508]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffc9e14080 a2=0 a3=ffffa8c9d6c0 items=0 ppid=2422 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.795000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:15:34.814846 kubelet[2201]: E1002 19:15:34.814808 2201 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-567xl_kube-system(151deab3-835b-44f8-842a-07e41c18fb22)\"" pod="kube-system/cilium-567xl" podUID="151deab3-835b-44f8-842a-07e41c18fb22" Oct 2 19:15:34.846000 audit[2514]: NETFILTER_CFG table=filter:39 family=2 entries=8 op=nft_register_rule pid=2514 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:15:34.846000 audit[2514]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4956 a0=3 a1=ffffeb26d680 a2=0 a3=ffffb03996c0 items=0 ppid=2422 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.846000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:15:34.855874 kubelet[2201]: I1002 19:15:34.855832 2201 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-7gpl2" podStartSLOduration=2.419914936 podCreationTimestamp="2023-10-02 19:15:18 +0000 UTC" firstStartedPulling="2023-10-02 19:15:19.675465522 +0000 UTC m=+4.212285411" lastFinishedPulling="2023-10-02 19:15:34.111320251 +0000 UTC m=+18.648140128" observedRunningTime="2023-10-02 19:15:34.855703485 +0000 UTC m=+19.392523374" watchObservedRunningTime="2023-10-02 19:15:34.855769653 +0000 UTC m=+19.392589530" Oct 2 19:15:34.879000 audit[2514]: NETFILTER_CFG table=nat:40 family=2 entries=14 op=nft_register_chain pid=2514 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:15:34.879000 audit[2514]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffeb26d680 a2=0 a3=ffffb03996c0 items=0 ppid=2422 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.879000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:15:34.886000 audit[2520]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=2520 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.886000 audit[2520]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffcbdd5ee0 a2=0 a3=ffffa5b236c0 items=0 ppid=2422 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.886000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:15:34.894000 audit[2522]: NETFILTER_CFG table=filter:42 family=10 entries=2 op=nft_register_chain pid=2522 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.894000 audit[2522]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd4d33890 a2=0 a3=ffff9eb536c0 items=0 ppid=2422 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.894000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:15:34.907000 audit[2525]: NETFILTER_CFG table=filter:43 family=10 entries=2 op=nft_register_chain pid=2525 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.907000 audit[2525]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd6f07630 a2=0 a3=ffffaec426c0 items=0 ppid=2422 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.907000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 19:15:34.911000 audit[2526]: NETFILTER_CFG table=filter:44 family=10 entries=1 op=nft_register_chain pid=2526 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.911000 audit[2526]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe0d0ca00 a2=0 a3=ffffb19236c0 items=0 ppid=2422 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.911000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:15:34.923000 audit[2528]: NETFILTER_CFG table=filter:45 family=10 entries=1 op=nft_register_rule pid=2528 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.923000 audit[2528]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd59a5210 a2=0 a3=ffffad7b36c0 items=0 ppid=2422 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.923000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:15:34.927000 audit[2529]: NETFILTER_CFG table=filter:46 family=10 entries=1 op=nft_register_chain pid=2529 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.927000 audit[2529]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff1967c10 a2=0 a3=ffffb286b6c0 items=0 ppid=2422 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.927000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:15:34.935000 audit[2531]: NETFILTER_CFG table=filter:47 family=10 entries=1 op=nft_register_rule pid=2531 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.935000 audit[2531]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc98fe5c0 a2=0 a3=ffff816a36c0 items=0 ppid=2422 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.935000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 19:15:34.947000 audit[2534]: NETFILTER_CFG table=filter:48 family=10 entries=2 op=nft_register_chain pid=2534 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.947000 audit[2534]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=fffffa3b8080 a2=0 a3=ffffa3bc66c0 
items=0 ppid=2422 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.947000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:15:34.953000 audit[2535]: NETFILTER_CFG table=filter:49 family=10 entries=1 op=nft_register_chain pid=2535 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.953000 audit[2535]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd557ce80 a2=0 a3=ffff82aeb6c0 items=0 ppid=2422 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.953000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:15:34.961000 audit[2537]: NETFILTER_CFG table=filter:50 family=10 entries=1 op=nft_register_rule pid=2537 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.961000 audit[2537]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcf417390 a2=0 a3=ffff8ab6f6c0 items=0 ppid=2422 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.961000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:15:34.965000 audit[2538]: NETFILTER_CFG table=filter:51 family=10 entries=1 op=nft_register_chain pid=2538 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.965000 audit[2538]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd625ead0 a2=0 a3=ffffa81be6c0 items=0 ppid=2422 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.965000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:15:34.974000 audit[2540]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_rule pid=2540 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.974000 audit[2540]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffc1f48e0 a2=0 a3=ffffbad096c0 items=0 ppid=2422 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.974000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:15:34.987000 audit[2543]: NETFILTER_CFG 
table=filter:53 family=10 entries=1 op=nft_register_rule pid=2543 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.987000 audit[2543]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffec4aaf0 a2=0 a3=ffff9c5f06c0 items=0 ppid=2422 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.987000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:15:34.999000 audit[2546]: NETFILTER_CFG table=filter:54 family=10 entries=1 op=nft_register_rule pid=2546 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.999000 audit[2546]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd9653b20 a2=0 a3=ffffa37d96c0 items=0 ppid=2422 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.999000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:15:35.003000 audit[2547]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=2547 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:35.003000 audit[2547]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc0bf76a0 a2=0 a3=ffffa4ad86c0 items=0 ppid=2422 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:35.003000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:15:35.010000 audit[2549]: NETFILTER_CFG table=nat:56 family=10 entries=2 op=nft_register_chain pid=2549 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:35.010000 audit[2549]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffd6dbf9c0 a2=0 a3=ffff84beb6c0 items=0 ppid=2422 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:35.010000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:15:35.022000 audit[2552]: NETFILTER_CFG table=nat:57 family=10 entries=2 op=nft_register_chain pid=2552 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:35.022000 audit[2552]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=fffff82f6980 a2=0 a3=ffff974a36c0 items=0 ppid=2422 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:35.022000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:15:35.027000 audit[2553]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=2553 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:35.027000 audit[2553]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd02cc960 a2=0 a3=ffff9fdfa6c0 items=0 ppid=2422 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:35.027000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:15:35.035000 audit[2555]: NETFILTER_CFG table=nat:59 family=10 entries=2 op=nft_register_chain pid=2555 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:35.035000 audit[2555]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffc3be84c0 a2=0 a3=ffffaab836c0 items=0 ppid=2422 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:35.035000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:15:35.039000 audit[2556]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=2556 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:35.039000 audit[2556]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffda3f2360 a2=0 a3=ffffa01086c0 items=0 ppid=2422 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:35.039000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:15:35.047000 audit[2558]: NETFILTER_CFG table=filter:61 family=10 entries=1 op=nft_register_rule pid=2558 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:35.047000 audit[2558]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffdd3f0320 a2=0 a3=ffff9eb406c0 items=0 ppid=2422 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:35.047000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:15:35.059000 audit[2561]: NETFILTER_CFG table=filter:62 family=10 entries=1 op=nft_register_rule pid=2561 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:35.059000 audit[2561]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe32b9120 a2=0 a3=ffffbb0f66c0 items=0 ppid=2422 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:35.059000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:15:35.068000 audit[2563]: NETFILTER_CFG table=filter:63 family=10 entries=3 op=nft_register_rule pid=2563 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:15:35.068000 audit[2563]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffef37d520 a2=0 a3=ffff9f4126c0 items=0 ppid=2422 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:35.068000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:15:35.069000 audit[2563]: NETFILTER_CFG table=nat:64 family=10 entries=7 op=nft_register_chain pid=2563 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:15:35.069000 audit[2563]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1968 a0=3 a1=ffffef37d520 a2=0 a3=ffff9f4126c0 items=0 ppid=2422 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:35.069000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:15:35.369391 kubelet[2201]: W1002 19:15:35.369239 2201 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod151deab3_835b_44f8_842a_07e41c18fb22.slice/cri-containerd-33a9d15244b1485b5f52db3b376cbf0400df13e6193431d5e962700ee02a6e81.scope WatchSource:0}: container "33a9d15244b1485b5f52db3b376cbf0400df13e6193431d5e962700ee02a6e81" in namespace "k8s.io": not found Oct 2 19:15:35.489254 kubelet[2201]: E1002 19:15:35.489197 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:36.313780 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Oct 2 19:15:36.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:15:36.334000 audit: BPF prog-id=60 op=UNLOAD Oct 2 19:15:36.334000 audit: BPF prog-id=59 op=UNLOAD Oct 2 19:15:36.334000 audit: BPF prog-id=58 op=UNLOAD Oct 2 19:15:36.475338 kubelet[2201]: E1002 19:15:36.475296 2201 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:36.489664 kubelet[2201]: E1002 19:15:36.489603 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:37.490713 kubelet[2201]: E1002 19:15:37.490648 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:38.481509 kubelet[2201]: W1002 19:15:38.481433 2201 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod151deab3_835b_44f8_842a_07e41c18fb22.slice/cri-containerd-066b42fd571b96e4c6b4be668d262594a7dfe5908f0580fe06939bb6ff7d53ea.scope WatchSource:0}: task 066b42fd571b96e4c6b4be668d262594a7dfe5908f0580fe06939bb6ff7d53ea not found: not found Oct 2 19:15:38.492992 kubelet[2201]: E1002 19:15:38.492956 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:39.494133 kubelet[2201]: E1002 19:15:39.494069 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:40.494822 kubelet[2201]: E1002 19:15:40.494778 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:41.496268 kubelet[2201]: E1002 19:15:41.496202 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:42.497341 kubelet[2201]: E1002 19:15:42.497307 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:43.498738 kubelet[2201]: E1002 19:15:43.498675 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:44.499324 kubelet[2201]: E1002 19:15:44.499281 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:45.500907 kubelet[2201]: E1002 19:15:45.500840 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:45.758748 env[1736]: time="2023-10-02T19:15:45.758423070Z" level=info msg="CreateContainer within sandbox \"1f4c4305d46b03db41fca1bf07f295066b212a4940b18f0b8f61f6916ed38269\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:15:45.787762 env[1736]: time="2023-10-02T19:15:45.787661857Z" level=info msg="CreateContainer within sandbox \"1f4c4305d46b03db41fca1bf07f295066b212a4940b18f0b8f61f6916ed38269\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"be58a3d8366d0f23dfc43aa84d691bd61cbc658a5e294768ce962d7a55c7356a\"" Oct 2 19:15:45.789217 env[1736]: time="2023-10-02T19:15:45.789116539Z" level=info msg="StartContainer for \"be58a3d8366d0f23dfc43aa84d691bd61cbc658a5e294768ce962d7a55c7356a\"" Oct 2 19:15:45.833648 systemd[1]: Started cri-containerd-be58a3d8366d0f23dfc43aa84d691bd61cbc658a5e294768ce962d7a55c7356a.scope. 
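Editor's annotation (not part of the captured journal): the audit PROCTITLE records earlier in this excerpt carry the invoking command line as a hex-encoded, NUL-separated argv, so each one is simply kube-proxy running iptables/ip6tables (or iptables-restore/ip6tables-restore) to set up its KUBE-* chains. A minimal decoding sketch follows; the hex value is copied from the PROCTITLE record logged at 19:15:34.787 above, and the variable names are illustrative only.

    # Decode one audit PROCTITLE value (hex-encoded, NUL-separated argv).
    # The blob is copied verbatim from the 19:15:34.787 PROCTITLE record above.
    blob = "69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174"
    argv = [part.decode() for part in bytes.fromhex(blob).split(b"\x00")]
    print(" ".join(argv))
    # prints: iptables -w 5 -W 100000 -N KUBE-POSTROUTING -t nat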
Oct 2 19:15:45.875963 systemd[1]: cri-containerd-be58a3d8366d0f23dfc43aa84d691bd61cbc658a5e294768ce962d7a55c7356a.scope: Deactivated successfully. Oct 2 19:15:46.098219 env[1736]: time="2023-10-02T19:15:46.098038284Z" level=info msg="shim disconnected" id=be58a3d8366d0f23dfc43aa84d691bd61cbc658a5e294768ce962d7a55c7356a Oct 2 19:15:46.098914 env[1736]: time="2023-10-02T19:15:46.098860832Z" level=warning msg="cleaning up after shim disconnected" id=be58a3d8366d0f23dfc43aa84d691bd61cbc658a5e294768ce962d7a55c7356a namespace=k8s.io Oct 2 19:15:46.099069 env[1736]: time="2023-10-02T19:15:46.099040444Z" level=info msg="cleaning up dead shim" Oct 2 19:15:46.125116 env[1736]: time="2023-10-02T19:15:46.125050858Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:15:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2590 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:15:46Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/be58a3d8366d0f23dfc43aa84d691bd61cbc658a5e294768ce962d7a55c7356a/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:15:46.125878 env[1736]: time="2023-10-02T19:15:46.125798239Z" level=error msg="copy shim log" error="read /proc/self/fd/55: file already closed" Oct 2 19:15:46.127329 env[1736]: time="2023-10-02T19:15:46.127261479Z" level=error msg="Failed to pipe stderr of container \"be58a3d8366d0f23dfc43aa84d691bd61cbc658a5e294768ce962d7a55c7356a\"" error="reading from a closed fifo" Oct 2 19:15:46.127442 env[1736]: time="2023-10-02T19:15:46.127362709Z" level=error msg="Failed to pipe stdout of container \"be58a3d8366d0f23dfc43aa84d691bd61cbc658a5e294768ce962d7a55c7356a\"" error="reading from a closed fifo" Oct 2 19:15:46.129702 env[1736]: time="2023-10-02T19:15:46.129624244Z" level=error msg="StartContainer for \"be58a3d8366d0f23dfc43aa84d691bd61cbc658a5e294768ce962d7a55c7356a\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:15:46.130004 kubelet[2201]: E1002 19:15:46.129951 2201 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="be58a3d8366d0f23dfc43aa84d691bd61cbc658a5e294768ce962d7a55c7356a" Oct 2 19:15:46.130190 kubelet[2201]: E1002 19:15:46.130110 2201 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:15:46.130190 kubelet[2201]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:15:46.130190 kubelet[2201]: rm /hostbin/cilium-mount Oct 2 19:15:46.130190 kubelet[2201]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6d4sp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-567xl_kube-system(151deab3-835b-44f8-842a-07e41c18fb22): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:15:46.130480 kubelet[2201]: E1002 19:15:46.130216 2201 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-567xl" podUID="151deab3-835b-44f8-842a-07e41c18fb22" Oct 2 19:15:46.501034 kubelet[2201]: E1002 19:15:46.500983 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:46.774935 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be58a3d8366d0f23dfc43aa84d691bd61cbc658a5e294768ce962d7a55c7356a-rootfs.mount: Deactivated successfully. 
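Editor's annotation (not part of the captured journal): every retry of the cilium-567xl mount-cgroup init container fails the same way. runc aborts during container init because writing the requested SELinux label (the container spec above asks for SELinuxOptions type spc_t) to /proc/self/attr/keycreate returns "invalid argument", so no init pid file is ever written; containerd's follow-up messages ("failed to read init pid file", "reading from a closed fifo") are downstream of that single failure rather than independent errors. Below is a minimal, read-only diagnostic sketch, assuming Python access on the affected node, that shows whether the kernel exposes the SELinux process attributes runc tries to write; it is not taken from the log.

    # Probe the SELinux interfaces involved in the failure above.
    # Read-only; on a kernel without SELinux these paths may be absent or empty.
    for path in ("/sys/fs/selinux/enforce",
                 "/proc/self/attr/current",
                 "/proc/self/attr/keycreate"):
        try:
            with open(path) as attr:
                print(f"{path}: {attr.read().strip() or '(empty)'}")
        except OSError as exc:
            print(f"{path}: {exc}")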
Oct 2 19:15:46.861173 kubelet[2201]: I1002 19:15:46.861105 2201 scope.go:117] "RemoveContainer" containerID="066b42fd571b96e4c6b4be668d262594a7dfe5908f0580fe06939bb6ff7d53ea" Oct 2 19:15:46.861824 kubelet[2201]: I1002 19:15:46.861772 2201 scope.go:117] "RemoveContainer" containerID="066b42fd571b96e4c6b4be668d262594a7dfe5908f0580fe06939bb6ff7d53ea" Oct 2 19:15:46.863937 env[1736]: time="2023-10-02T19:15:46.863886508Z" level=info msg="RemoveContainer for \"066b42fd571b96e4c6b4be668d262594a7dfe5908f0580fe06939bb6ff7d53ea\"" Oct 2 19:15:46.865936 env[1736]: time="2023-10-02T19:15:46.865837910Z" level=info msg="RemoveContainer for \"066b42fd571b96e4c6b4be668d262594a7dfe5908f0580fe06939bb6ff7d53ea\"" Oct 2 19:15:46.866196 env[1736]: time="2023-10-02T19:15:46.866096745Z" level=error msg="RemoveContainer for \"066b42fd571b96e4c6b4be668d262594a7dfe5908f0580fe06939bb6ff7d53ea\" failed" error="failed to set removing state for container \"066b42fd571b96e4c6b4be668d262594a7dfe5908f0580fe06939bb6ff7d53ea\": container is already in removing state" Oct 2 19:15:46.867798 kubelet[2201]: E1002 19:15:46.867763 2201 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"066b42fd571b96e4c6b4be668d262594a7dfe5908f0580fe06939bb6ff7d53ea\": container is already in removing state" containerID="066b42fd571b96e4c6b4be668d262594a7dfe5908f0580fe06939bb6ff7d53ea" Oct 2 19:15:46.868243 kubelet[2201]: E1002 19:15:46.868099 2201 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "066b42fd571b96e4c6b4be668d262594a7dfe5908f0580fe06939bb6ff7d53ea": container is already in removing state; Skipping pod "cilium-567xl_kube-system(151deab3-835b-44f8-842a-07e41c18fb22)" Oct 2 19:15:46.869185 kubelet[2201]: E1002 19:15:46.869112 2201 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-567xl_kube-system(151deab3-835b-44f8-842a-07e41c18fb22)\"" pod="kube-system/cilium-567xl" podUID="151deab3-835b-44f8-842a-07e41c18fb22" Oct 2 19:15:46.870451 env[1736]: time="2023-10-02T19:15:46.870398984Z" level=info msg="RemoveContainer for \"066b42fd571b96e4c6b4be668d262594a7dfe5908f0580fe06939bb6ff7d53ea\" returns successfully" Oct 2 19:15:47.501778 kubelet[2201]: E1002 19:15:47.501743 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:48.503376 kubelet[2201]: E1002 19:15:48.503335 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:49.203422 kubelet[2201]: W1002 19:15:49.203365 2201 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod151deab3_835b_44f8_842a_07e41c18fb22.slice/cri-containerd-be58a3d8366d0f23dfc43aa84d691bd61cbc658a5e294768ce962d7a55c7356a.scope WatchSource:0}: task be58a3d8366d0f23dfc43aa84d691bd61cbc658a5e294768ce962d7a55c7356a not found: not found Oct 2 19:15:49.504556 kubelet[2201]: E1002 19:15:49.504245 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:50.505103 kubelet[2201]: E1002 19:15:50.505055 2201 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:51.414882 update_engine[1728]: I1002 19:15:51.414311 1728 update_attempter.cc:505] Updating boot flags... Oct 2 19:15:51.506893 kubelet[2201]: E1002 19:15:51.506779 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:52.507535 kubelet[2201]: E1002 19:15:52.507479 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:53.507864 kubelet[2201]: E1002 19:15:53.507824 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:54.509668 kubelet[2201]: E1002 19:15:54.509598 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:55.510654 kubelet[2201]: E1002 19:15:55.510599 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:56.475399 kubelet[2201]: E1002 19:15:56.475359 2201 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:56.511439 kubelet[2201]: E1002 19:15:56.511405 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:57.512685 kubelet[2201]: E1002 19:15:57.512618 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:58.513368 kubelet[2201]: E1002 19:15:58.513303 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:59.514198 kubelet[2201]: E1002 19:15:59.514152 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:59.755706 kubelet[2201]: E1002 19:15:59.755653 2201 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-567xl_kube-system(151deab3-835b-44f8-842a-07e41c18fb22)\"" pod="kube-system/cilium-567xl" podUID="151deab3-835b-44f8-842a-07e41c18fb22" Oct 2 19:16:00.515386 kubelet[2201]: E1002 19:16:00.515325 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:01.515819 kubelet[2201]: E1002 19:16:01.515784 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:02.517469 kubelet[2201]: E1002 19:16:02.517405 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:03.518645 kubelet[2201]: E1002 19:16:03.518599 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:04.519794 kubelet[2201]: E1002 19:16:04.519736 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:05.520486 kubelet[2201]: E1002 19:16:05.520448 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:06.521337 kubelet[2201]: E1002 19:16:06.521263 2201 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:07.522351 kubelet[2201]: E1002 19:16:07.522310 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:08.523168 kubelet[2201]: E1002 19:16:08.523093 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:09.524507 kubelet[2201]: E1002 19:16:09.524470 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:10.525555 kubelet[2201]: E1002 19:16:10.525510 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:11.526949 kubelet[2201]: E1002 19:16:11.526901 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:11.758437 env[1736]: time="2023-10-02T19:16:11.758224495Z" level=info msg="CreateContainer within sandbox \"1f4c4305d46b03db41fca1bf07f295066b212a4940b18f0b8f61f6916ed38269\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:16:11.777881 env[1736]: time="2023-10-02T19:16:11.777273057Z" level=info msg="CreateContainer within sandbox \"1f4c4305d46b03db41fca1bf07f295066b212a4940b18f0b8f61f6916ed38269\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"b48ba3fddae2c849ed99b00dcff4f0e45bba8e42b9f696a0e5a4e031c934c957\"" Oct 2 19:16:11.779213 env[1736]: time="2023-10-02T19:16:11.778365362Z" level=info msg="StartContainer for \"b48ba3fddae2c849ed99b00dcff4f0e45bba8e42b9f696a0e5a4e031c934c957\"" Oct 2 19:16:11.830568 systemd[1]: run-containerd-runc-k8s.io-b48ba3fddae2c849ed99b00dcff4f0e45bba8e42b9f696a0e5a4e031c934c957-runc.84nqQR.mount: Deactivated successfully. Oct 2 19:16:11.840084 systemd[1]: Started cri-containerd-b48ba3fddae2c849ed99b00dcff4f0e45bba8e42b9f696a0e5a4e031c934c957.scope. Oct 2 19:16:11.872652 systemd[1]: cri-containerd-b48ba3fddae2c849ed99b00dcff4f0e45bba8e42b9f696a0e5a4e031c934c957.scope: Deactivated successfully. Oct 2 19:16:11.880890 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b48ba3fddae2c849ed99b00dcff4f0e45bba8e42b9f696a0e5a4e031c934c957-rootfs.mount: Deactivated successfully. 
Oct 2 19:16:11.900182 env[1736]: time="2023-10-02T19:16:11.900087737Z" level=info msg="shim disconnected" id=b48ba3fddae2c849ed99b00dcff4f0e45bba8e42b9f696a0e5a4e031c934c957 Oct 2 19:16:11.900652 env[1736]: time="2023-10-02T19:16:11.900606230Z" level=warning msg="cleaning up after shim disconnected" id=b48ba3fddae2c849ed99b00dcff4f0e45bba8e42b9f696a0e5a4e031c934c957 namespace=k8s.io Oct 2 19:16:11.900825 env[1736]: time="2023-10-02T19:16:11.900796512Z" level=info msg="cleaning up dead shim" Oct 2 19:16:11.935796 env[1736]: time="2023-10-02T19:16:11.935727292Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:16:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2813 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:16:11Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/b48ba3fddae2c849ed99b00dcff4f0e45bba8e42b9f696a0e5a4e031c934c957/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:16:11.936530 env[1736]: time="2023-10-02T19:16:11.936446939Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:16:11.937269 env[1736]: time="2023-10-02T19:16:11.937209306Z" level=error msg="Failed to pipe stdout of container \"b48ba3fddae2c849ed99b00dcff4f0e45bba8e42b9f696a0e5a4e031c934c957\"" error="reading from a closed fifo" Oct 2 19:16:11.939271 env[1736]: time="2023-10-02T19:16:11.939214624Z" level=error msg="Failed to pipe stderr of container \"b48ba3fddae2c849ed99b00dcff4f0e45bba8e42b9f696a0e5a4e031c934c957\"" error="reading from a closed fifo" Oct 2 19:16:11.941583 env[1736]: time="2023-10-02T19:16:11.941513520Z" level=error msg="StartContainer for \"b48ba3fddae2c849ed99b00dcff4f0e45bba8e42b9f696a0e5a4e031c934c957\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:16:11.942390 kubelet[2201]: E1002 19:16:11.941987 2201 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="b48ba3fddae2c849ed99b00dcff4f0e45bba8e42b9f696a0e5a4e031c934c957" Oct 2 19:16:11.942390 kubelet[2201]: E1002 19:16:11.942267 2201 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:16:11.942390 kubelet[2201]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:16:11.942390 kubelet[2201]: rm /hostbin/cilium-mount Oct 2 19:16:11.942390 kubelet[2201]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6d4sp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-567xl_kube-system(151deab3-835b-44f8-842a-07e41c18fb22): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:16:11.942390 kubelet[2201]: E1002 19:16:11.942353 2201 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-567xl" podUID="151deab3-835b-44f8-842a-07e41c18fb22" Oct 2 19:16:12.528625 kubelet[2201]: E1002 19:16:12.528543 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:12.929783 kubelet[2201]: I1002 19:16:12.929750 2201 scope.go:117] "RemoveContainer" containerID="be58a3d8366d0f23dfc43aa84d691bd61cbc658a5e294768ce962d7a55c7356a" Oct 2 19:16:12.930817 kubelet[2201]: I1002 19:16:12.930787 2201 scope.go:117] "RemoveContainer" containerID="be58a3d8366d0f23dfc43aa84d691bd61cbc658a5e294768ce962d7a55c7356a" Oct 2 19:16:12.932559 env[1736]: time="2023-10-02T19:16:12.932478927Z" level=info msg="RemoveContainer for \"be58a3d8366d0f23dfc43aa84d691bd61cbc658a5e294768ce962d7a55c7356a\"" Oct 2 19:16:12.935510 env[1736]: time="2023-10-02T19:16:12.935439716Z" level=info msg="RemoveContainer for \"be58a3d8366d0f23dfc43aa84d691bd61cbc658a5e294768ce962d7a55c7356a\"" Oct 2 19:16:12.936840 env[1736]: time="2023-10-02T19:16:12.935617315Z" level=error msg="RemoveContainer for \"be58a3d8366d0f23dfc43aa84d691bd61cbc658a5e294768ce962d7a55c7356a\" failed" error="rpc error: code = NotFound desc = get container info: container \"be58a3d8366d0f23dfc43aa84d691bd61cbc658a5e294768ce962d7a55c7356a\" in namespace \"k8s.io\": not found" Oct 2 19:16:12.937526 env[1736]: time="2023-10-02T19:16:12.937355479Z" level=info 
msg="RemoveContainer for \"be58a3d8366d0f23dfc43aa84d691bd61cbc658a5e294768ce962d7a55c7356a\" returns successfully" Oct 2 19:16:12.937877 kubelet[2201]: E1002 19:16:12.937813 2201 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = NotFound desc = get container info: container \"be58a3d8366d0f23dfc43aa84d691bd61cbc658a5e294768ce962d7a55c7356a\" in namespace \"k8s.io\": not found" containerID="be58a3d8366d0f23dfc43aa84d691bd61cbc658a5e294768ce962d7a55c7356a" Oct 2 19:16:12.937877 kubelet[2201]: E1002 19:16:12.937871 2201 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = NotFound desc = get container info: container "be58a3d8366d0f23dfc43aa84d691bd61cbc658a5e294768ce962d7a55c7356a" in namespace "k8s.io": not found; Skipping pod "cilium-567xl_kube-system(151deab3-835b-44f8-842a-07e41c18fb22)" Oct 2 19:16:12.938369 kubelet[2201]: E1002 19:16:12.938329 2201 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-567xl_kube-system(151deab3-835b-44f8-842a-07e41c18fb22)\"" pod="kube-system/cilium-567xl" podUID="151deab3-835b-44f8-842a-07e41c18fb22" Oct 2 19:16:13.529623 kubelet[2201]: E1002 19:16:13.529548 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:14.529942 kubelet[2201]: E1002 19:16:14.529879 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:15.005011 kubelet[2201]: W1002 19:16:15.004943 2201 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod151deab3_835b_44f8_842a_07e41c18fb22.slice/cri-containerd-b48ba3fddae2c849ed99b00dcff4f0e45bba8e42b9f696a0e5a4e031c934c957.scope WatchSource:0}: task b48ba3fddae2c849ed99b00dcff4f0e45bba8e42b9f696a0e5a4e031c934c957 not found: not found Oct 2 19:16:15.530835 kubelet[2201]: E1002 19:16:15.530762 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:16.475037 kubelet[2201]: E1002 19:16:16.474999 2201 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:16.530998 kubelet[2201]: E1002 19:16:16.530957 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:17.532390 kubelet[2201]: E1002 19:16:17.532325 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:18.533540 kubelet[2201]: E1002 19:16:18.533503 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:19.534378 kubelet[2201]: E1002 19:16:19.534328 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:20.535014 kubelet[2201]: E1002 19:16:20.534957 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:21.535795 kubelet[2201]: E1002 19:16:21.535726 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:22.536347 kubelet[2201]: E1002 
19:16:22.536290 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:23.538036 kubelet[2201]: E1002 19:16:23.537970 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:24.539751 kubelet[2201]: E1002 19:16:24.539711 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:25.540735 kubelet[2201]: E1002 19:16:25.540671 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:26.541740 kubelet[2201]: E1002 19:16:26.541696 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:27.542989 kubelet[2201]: E1002 19:16:27.542932 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:27.755045 kubelet[2201]: E1002 19:16:27.754993 2201 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-567xl_kube-system(151deab3-835b-44f8-842a-07e41c18fb22)\"" pod="kube-system/cilium-567xl" podUID="151deab3-835b-44f8-842a-07e41c18fb22" Oct 2 19:16:28.543094 kubelet[2201]: E1002 19:16:28.543048 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:29.544059 kubelet[2201]: E1002 19:16:29.543992 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:30.544436 kubelet[2201]: E1002 19:16:30.544394 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:31.545932 kubelet[2201]: E1002 19:16:31.545688 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:32.547367 kubelet[2201]: E1002 19:16:32.547306 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:33.548480 kubelet[2201]: E1002 19:16:33.548410 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:34.549029 kubelet[2201]: E1002 19:16:34.548972 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:35.550695 kubelet[2201]: E1002 19:16:35.550633 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:36.474558 kubelet[2201]: E1002 19:16:36.474469 2201 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:36.550886 kubelet[2201]: E1002 19:16:36.550831 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:37.552313 kubelet[2201]: E1002 19:16:37.552251 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:38.553453 kubelet[2201]: E1002 19:16:38.553379 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:16:39.554566 kubelet[2201]: E1002 19:16:39.554527 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:40.555826 kubelet[2201]: E1002 19:16:40.555785 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:41.556531 kubelet[2201]: E1002 19:16:41.556498 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:42.557268 kubelet[2201]: E1002 19:16:42.557221 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:42.758150 kubelet[2201]: E1002 19:16:42.758066 2201 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-567xl_kube-system(151deab3-835b-44f8-842a-07e41c18fb22)\"" pod="kube-system/cilium-567xl" podUID="151deab3-835b-44f8-842a-07e41c18fb22" Oct 2 19:16:43.558879 kubelet[2201]: E1002 19:16:43.558838 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:44.559755 kubelet[2201]: E1002 19:16:44.559716 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:45.561090 kubelet[2201]: E1002 19:16:45.561053 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:46.562199 kubelet[2201]: E1002 19:16:46.562099 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:47.562966 kubelet[2201]: E1002 19:16:47.562923 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:48.563750 kubelet[2201]: E1002 19:16:48.563712 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:49.564645 kubelet[2201]: E1002 19:16:49.564588 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:50.565470 kubelet[2201]: E1002 19:16:50.565415 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:51.566597 kubelet[2201]: E1002 19:16:51.566525 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:52.567185 kubelet[2201]: E1002 19:16:52.567103 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:53.567740 kubelet[2201]: E1002 19:16:53.567697 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:54.569111 kubelet[2201]: E1002 19:16:54.569049 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:54.761263 env[1736]: time="2023-10-02T19:16:54.761189796Z" level=info msg="CreateContainer within sandbox \"1f4c4305d46b03db41fca1bf07f295066b212a4940b18f0b8f61f6916ed38269\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 19:16:54.777191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1846863393.mount: Deactivated successfully. Oct 2 19:16:54.790142 env[1736]: time="2023-10-02T19:16:54.790056333Z" level=info msg="CreateContainer within sandbox \"1f4c4305d46b03db41fca1bf07f295066b212a4940b18f0b8f61f6916ed38269\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"e70763f17859cd8ea76b854d2bc6f621475fa76aaf6ce7120d779c35480aee6a\"" Oct 2 19:16:54.791455 env[1736]: time="2023-10-02T19:16:54.791359804Z" level=info msg="StartContainer for \"e70763f17859cd8ea76b854d2bc6f621475fa76aaf6ce7120d779c35480aee6a\"" Oct 2 19:16:54.840153 systemd[1]: Started cri-containerd-e70763f17859cd8ea76b854d2bc6f621475fa76aaf6ce7120d779c35480aee6a.scope. Oct 2 19:16:54.879310 systemd[1]: cri-containerd-e70763f17859cd8ea76b854d2bc6f621475fa76aaf6ce7120d779c35480aee6a.scope: Deactivated successfully. Oct 2 19:16:54.898219 env[1736]: time="2023-10-02T19:16:54.898115487Z" level=info msg="shim disconnected" id=e70763f17859cd8ea76b854d2bc6f621475fa76aaf6ce7120d779c35480aee6a Oct 2 19:16:54.898219 env[1736]: time="2023-10-02T19:16:54.898216300Z" level=warning msg="cleaning up after shim disconnected" id=e70763f17859cd8ea76b854d2bc6f621475fa76aaf6ce7120d779c35480aee6a namespace=k8s.io Oct 2 19:16:54.898587 env[1736]: time="2023-10-02T19:16:54.898239089Z" level=info msg="cleaning up dead shim" Oct 2 19:16:54.924850 env[1736]: time="2023-10-02T19:16:54.924760310Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:16:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2855 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:16:54Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e70763f17859cd8ea76b854d2bc6f621475fa76aaf6ce7120d779c35480aee6a/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:16:54.925359 env[1736]: time="2023-10-02T19:16:54.925270714Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:16:54.929089 env[1736]: time="2023-10-02T19:16:54.929030467Z" level=error msg="Failed to pipe stderr of container \"e70763f17859cd8ea76b854d2bc6f621475fa76aaf6ce7120d779c35480aee6a\"" error="reading from a closed fifo" Oct 2 19:16:54.929413 env[1736]: time="2023-10-02T19:16:54.929312411Z" level=error msg="Failed to pipe stdout of container \"e70763f17859cd8ea76b854d2bc6f621475fa76aaf6ce7120d779c35480aee6a\"" error="reading from a closed fifo" Oct 2 19:16:54.934176 env[1736]: time="2023-10-02T19:16:54.934088976Z" level=error msg="StartContainer for \"e70763f17859cd8ea76b854d2bc6f621475fa76aaf6ce7120d779c35480aee6a\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:16:54.934878 kubelet[2201]: E1002 19:16:54.934618 2201 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e70763f17859cd8ea76b854d2bc6f621475fa76aaf6ce7120d779c35480aee6a" Oct 2 19:16:54.934878 kubelet[2201]: E1002 19:16:54.934769 2201 
kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:16:54.934878 kubelet[2201]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:16:54.934878 kubelet[2201]: rm /hostbin/cilium-mount Oct 2 19:16:54.934878 kubelet[2201]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6d4sp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-567xl_kube-system(151deab3-835b-44f8-842a-07e41c18fb22): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:16:54.934878 kubelet[2201]: E1002 19:16:54.934833 2201 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-567xl" podUID="151deab3-835b-44f8-842a-07e41c18fb22" Oct 2 19:16:55.028360 kubelet[2201]: I1002 19:16:55.028148 2201 scope.go:117] "RemoveContainer" containerID="b48ba3fddae2c849ed99b00dcff4f0e45bba8e42b9f696a0e5a4e031c934c957" Oct 2 19:16:55.028915 kubelet[2201]: I1002 19:16:55.028556 2201 scope.go:117] "RemoveContainer" containerID="b48ba3fddae2c849ed99b00dcff4f0e45bba8e42b9f696a0e5a4e031c934c957" Oct 2 19:16:55.031008 env[1736]: time="2023-10-02T19:16:55.030942664Z" level=info msg="RemoveContainer for \"b48ba3fddae2c849ed99b00dcff4f0e45bba8e42b9f696a0e5a4e031c934c957\"" Oct 2 19:16:55.035077 env[1736]: time="2023-10-02T19:16:55.035012116Z" level=info msg="RemoveContainer for \"b48ba3fddae2c849ed99b00dcff4f0e45bba8e42b9f696a0e5a4e031c934c957\"" Oct 2 19:16:55.035295 env[1736]: time="2023-10-02T19:16:55.035174863Z" level=error msg="RemoveContainer for 
\"b48ba3fddae2c849ed99b00dcff4f0e45bba8e42b9f696a0e5a4e031c934c957\" failed" error="rpc error: code = NotFound desc = get container info: container \"b48ba3fddae2c849ed99b00dcff4f0e45bba8e42b9f696a0e5a4e031c934c957\" in namespace \"k8s.io\": not found" Oct 2 19:16:55.035628 kubelet[2201]: E1002 19:16:55.035594 2201 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = NotFound desc = get container info: container \"b48ba3fddae2c849ed99b00dcff4f0e45bba8e42b9f696a0e5a4e031c934c957\" in namespace \"k8s.io\": not found" containerID="b48ba3fddae2c849ed99b00dcff4f0e45bba8e42b9f696a0e5a4e031c934c957" Oct 2 19:16:55.035800 kubelet[2201]: E1002 19:16:55.035778 2201 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = NotFound desc = get container info: container "b48ba3fddae2c849ed99b00dcff4f0e45bba8e42b9f696a0e5a4e031c934c957" in namespace "k8s.io": not found; Skipping pod "cilium-567xl_kube-system(151deab3-835b-44f8-842a-07e41c18fb22)" Oct 2 19:16:55.036533 kubelet[2201]: E1002 19:16:55.036504 2201 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-567xl_kube-system(151deab3-835b-44f8-842a-07e41c18fb22)\"" pod="kube-system/cilium-567xl" podUID="151deab3-835b-44f8-842a-07e41c18fb22" Oct 2 19:16:55.036676 env[1736]: time="2023-10-02T19:16:55.036613804Z" level=info msg="RemoveContainer for \"b48ba3fddae2c849ed99b00dcff4f0e45bba8e42b9f696a0e5a4e031c934c957\" returns successfully" Oct 2 19:16:55.569717 kubelet[2201]: E1002 19:16:55.569674 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:55.773086 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e70763f17859cd8ea76b854d2bc6f621475fa76aaf6ce7120d779c35480aee6a-rootfs.mount: Deactivated successfully. 
Oct 2 19:16:56.475160 kubelet[2201]: E1002 19:16:56.475093 2201 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:56.570866 kubelet[2201]: E1002 19:16:56.570801 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:57.571303 kubelet[2201]: E1002 19:16:57.571265 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:58.004034 kubelet[2201]: W1002 19:16:58.003990 2201 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod151deab3_835b_44f8_842a_07e41c18fb22.slice/cri-containerd-e70763f17859cd8ea76b854d2bc6f621475fa76aaf6ce7120d779c35480aee6a.scope WatchSource:0}: task e70763f17859cd8ea76b854d2bc6f621475fa76aaf6ce7120d779c35480aee6a not found: not found Oct 2 19:16:58.572786 kubelet[2201]: E1002 19:16:58.572748 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:59.574168 kubelet[2201]: E1002 19:16:59.574130 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:00.575363 kubelet[2201]: E1002 19:17:00.575292 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:01.575644 kubelet[2201]: E1002 19:17:01.575599 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:02.576475 kubelet[2201]: E1002 19:17:02.576436 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:03.577601 kubelet[2201]: E1002 19:17:03.577540 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:04.578618 kubelet[2201]: E1002 19:17:04.578585 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:05.579857 kubelet[2201]: E1002 19:17:05.579796 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:05.755515 kubelet[2201]: E1002 19:17:05.755461 2201 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-567xl_kube-system(151deab3-835b-44f8-842a-07e41c18fb22)\"" pod="kube-system/cilium-567xl" podUID="151deab3-835b-44f8-842a-07e41c18fb22" Oct 2 19:17:06.580543 kubelet[2201]: E1002 19:17:06.580497 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:07.582084 kubelet[2201]: E1002 19:17:07.582020 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:08.583050 kubelet[2201]: E1002 19:17:08.582980 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:09.584314 kubelet[2201]: E1002 19:17:09.584252 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:10.584901 
kubelet[2201]: E1002 19:17:10.584863 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:11.586292 kubelet[2201]: E1002 19:17:11.586247 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:12.587948 kubelet[2201]: E1002 19:17:12.587891 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:13.432889 update_engine[1728]: I1002 19:17:13.432205 1728 prefs.cc:51] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Oct 2 19:17:13.432889 update_engine[1728]: I1002 19:17:13.432258 1728 prefs.cc:51] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Oct 2 19:17:13.432889 update_engine[1728]: I1002 19:17:13.432615 1728 prefs.cc:51] aleph-version not present in /var/lib/update_engine/prefs Oct 2 19:17:13.433584 update_engine[1728]: I1002 19:17:13.433485 1728 omaha_request_params.cc:62] Current group set to lts Oct 2 19:17:13.433855 update_engine[1728]: I1002 19:17:13.433682 1728 update_attempter.cc:495] Already updated boot flags. Skipping. Oct 2 19:17:13.433855 update_engine[1728]: I1002 19:17:13.433704 1728 update_attempter.cc:638] Scheduling an action processor start. Oct 2 19:17:13.433855 update_engine[1728]: I1002 19:17:13.433734 1728 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 2 19:17:13.433855 update_engine[1728]: I1002 19:17:13.433784 1728 prefs.cc:51] previous-version not present in /var/lib/update_engine/prefs Oct 2 19:17:13.435205 update_engine[1728]: I1002 19:17:13.435035 1728 omaha_request_action.cc:268] Posting an Omaha request to https://public.update.flatcar-linux.net/v1/update/ Oct 2 19:17:13.435205 update_engine[1728]: I1002 19:17:13.435086 1728 omaha_request_action.cc:269] Request: Oct 2 19:17:13.435205 update_engine[1728]: Oct 2 19:17:13.435205 update_engine[1728]: Oct 2 19:17:13.435205 update_engine[1728]: Oct 2 19:17:13.435205 update_engine[1728]: Oct 2 19:17:13.435205 update_engine[1728]: Oct 2 19:17:13.435205 update_engine[1728]: Oct 2 19:17:13.435205 update_engine[1728]: Oct 2 19:17:13.435205 update_engine[1728]: Oct 2 19:17:13.435205 update_engine[1728]: I1002 19:17:13.435101 1728 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 2 19:17:13.435949 locksmithd[1782]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Oct 2 19:17:13.438609 update_engine[1728]: I1002 19:17:13.438553 1728 libcurl_http_fetcher.cc:174] Setting up curl options for HTTPS Oct 2 19:17:13.438863 update_engine[1728]: I1002 19:17:13.438813 1728 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
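The update_engine entries above are the Flatcar client performing a routine Omaha update check against the endpoint shown in the log. A rough sketch of such a check, assuming the Omaha v3 XML protocol; only the URL and the "lts" group come from the entries above, and the app id, version and board below are placeholders for values this log does not reproduce:

```
# Rough sketch of an Omaha-style update check (v3 XML protocol assumed).
# OMAHA_URL and track="lts" come from the log; every other value here is a
# placeholder, not what this host actually sent.
import urllib.request

OMAHA_URL = "https://public.update.flatcar-linux.net/v1/update/"

request_xml = """<?xml version="1.0" encoding="UTF-8"?>
<request protocol="3.0" installsource="scheduler">
  <app appid="PLACEHOLDER-APP-ID" version="0.0.0.0" track="lts" board="PLACEHOLDER-BOARD">
    <updatecheck></updatecheck>
  </app>
</request>"""

req = urllib.request.Request(
    OMAHA_URL,
    data=request_xml.encode(),
    headers={"Content-Type": "text/xml"},
    method="POST",
)
with urllib.request.urlopen(req, timeout=30) as resp:
    body = resp.read().decode()

# An Omaha "no update" answer is typically flagged as status="noupdate",
# which update_engine summarises below as "No update."
print("no update" if 'status="noupdate"' in body else body)
```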
Oct 2 19:17:13.588819 kubelet[2201]: E1002 19:17:13.588760 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:14.589565 kubelet[2201]: E1002 19:17:14.589523 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:14.642556 update_engine[1728]: I1002 19:17:14.642490 1728 prefs.cc:51] update-server-cert-0-2 not present in /var/lib/update_engine/prefs Oct 2 19:17:14.643057 update_engine[1728]: I1002 19:17:14.642877 1728 prefs.cc:51] update-server-cert-0-1 not present in /var/lib/update_engine/prefs Oct 2 19:17:14.643261 update_engine[1728]: I1002 19:17:14.643218 1728 prefs.cc:51] update-server-cert-0-0 not present in /var/lib/update_engine/prefs Oct 2 19:17:14.978509 update_engine[1728]: I1002 19:17:14.978448 1728 libcurl_http_fetcher.cc:263] HTTP response code: 200 Oct 2 19:17:14.980587 update_engine[1728]: I1002 19:17:14.980523 1728 libcurl_http_fetcher.cc:320] Transfer completed (200), 314 bytes downloaded Oct 2 19:17:14.980587 update_engine[1728]: I1002 19:17:14.980568 1728 omaha_request_action.cc:619] Omaha request response: Oct 2 19:17:14.980587 update_engine[1728]: Oct 2 19:17:14.988128 update_engine[1728]: I1002 19:17:14.988068 1728 omaha_request_action.cc:409] No update. Oct 2 19:17:14.988272 update_engine[1728]: I1002 19:17:14.988134 1728 action_processor.cc:82] ActionProcessor::ActionComplete: finished OmahaRequestAction, starting OmahaResponseHandlerAction Oct 2 19:17:14.988272 update_engine[1728]: I1002 19:17:14.988150 1728 omaha_response_handler_action.cc:36] There are no updates. Aborting. Oct 2 19:17:14.988272 update_engine[1728]: I1002 19:17:14.988160 1728 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaResponseHandlerAction action failed. Aborting processing. Oct 2 19:17:14.988272 update_engine[1728]: I1002 19:17:14.988169 1728 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaResponseHandlerAction Oct 2 19:17:14.988272 update_engine[1728]: I1002 19:17:14.988178 1728 update_attempter.cc:302] Processing Done. Oct 2 19:17:14.988272 update_engine[1728]: I1002 19:17:14.988201 1728 update_attempter.cc:338] No update. 
Oct 2 19:17:14.988272 update_engine[1728]: I1002 19:17:14.988219 1728 update_check_scheduler.cc:74] Next update check in 46m1s Oct 2 19:17:14.988744 locksmithd[1782]: LastCheckedTime=1696274234 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Oct 2 19:17:15.590321 kubelet[2201]: E1002 19:17:15.590279 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:16.474797 kubelet[2201]: E1002 19:17:16.474680 2201 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:16.554408 kubelet[2201]: E1002 19:17:16.554334 2201 kubelet_node_status.go:452] "Node not becoming ready in time after startup" Oct 2 19:17:16.591832 kubelet[2201]: E1002 19:17:16.591773 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:16.639165 kubelet[2201]: E1002 19:17:16.639069 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:17.592244 kubelet[2201]: E1002 19:17:17.592165 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:18.592451 kubelet[2201]: E1002 19:17:18.592387 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:18.755641 kubelet[2201]: E1002 19:17:18.755585 2201 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-567xl_kube-system(151deab3-835b-44f8-842a-07e41c18fb22)\"" pod="kube-system/cilium-567xl" podUID="151deab3-835b-44f8-842a-07e41c18fb22" Oct 2 19:17:19.593546 kubelet[2201]: E1002 19:17:19.593474 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:20.593965 kubelet[2201]: E1002 19:17:20.593900 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:21.594713 kubelet[2201]: E1002 19:17:21.594642 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:21.640863 kubelet[2201]: E1002 19:17:21.640808 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:22.595501 kubelet[2201]: E1002 19:17:22.595438 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:23.595816 kubelet[2201]: E1002 19:17:23.595771 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:24.597114 kubelet[2201]: E1002 19:17:24.597050 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:25.597717 kubelet[2201]: E1002 19:17:25.597649 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:26.598471 kubelet[2201]: E1002 19:17:26.598408 2201 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:26.641993 kubelet[2201]: E1002 19:17:26.641946 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:27.599293 kubelet[2201]: E1002 19:17:27.599227 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:28.599976 kubelet[2201]: E1002 19:17:28.599912 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:29.600377 kubelet[2201]: E1002 19:17:29.600313 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:30.600845 kubelet[2201]: E1002 19:17:30.600804 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:30.755623 kubelet[2201]: E1002 19:17:30.755568 2201 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-567xl_kube-system(151deab3-835b-44f8-842a-07e41c18fb22)\"" pod="kube-system/cilium-567xl" podUID="151deab3-835b-44f8-842a-07e41c18fb22" Oct 2 19:17:31.602453 kubelet[2201]: E1002 19:17:31.602392 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:31.643510 kubelet[2201]: E1002 19:17:31.643476 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:32.603347 kubelet[2201]: E1002 19:17:32.603278 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:33.604192 kubelet[2201]: E1002 19:17:33.604150 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:34.604975 kubelet[2201]: E1002 19:17:34.604930 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:35.606760 kubelet[2201]: E1002 19:17:35.606691 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:36.474728 kubelet[2201]: E1002 19:17:36.474661 2201 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:36.607839 kubelet[2201]: E1002 19:17:36.607794 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:36.644652 kubelet[2201]: E1002 19:17:36.644610 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:37.608604 kubelet[2201]: E1002 19:17:37.608560 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:38.610162 kubelet[2201]: E1002 19:17:38.610087 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:17:39.611079 kubelet[2201]: E1002 19:17:39.611034 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:40.612377 kubelet[2201]: E1002 19:17:40.612312 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:41.613201 kubelet[2201]: E1002 19:17:41.613143 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:41.646328 kubelet[2201]: E1002 19:17:41.646277 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:42.613787 kubelet[2201]: E1002 19:17:42.613722 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:43.614408 kubelet[2201]: E1002 19:17:43.614344 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:44.615595 kubelet[2201]: E1002 19:17:44.615522 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:45.616070 kubelet[2201]: E1002 19:17:45.616006 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:45.754878 kubelet[2201]: E1002 19:17:45.754840 2201 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-567xl_kube-system(151deab3-835b-44f8-842a-07e41c18fb22)\"" pod="kube-system/cilium-567xl" podUID="151deab3-835b-44f8-842a-07e41c18fb22" Oct 2 19:17:46.616209 kubelet[2201]: E1002 19:17:46.616145 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:46.647031 kubelet[2201]: E1002 19:17:46.646979 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:47.617303 kubelet[2201]: E1002 19:17:47.617257 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:48.618464 kubelet[2201]: E1002 19:17:48.618424 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:49.619721 kubelet[2201]: E1002 19:17:49.619651 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:50.620588 kubelet[2201]: E1002 19:17:50.620491 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:51.621039 kubelet[2201]: E1002 19:17:51.620982 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:51.648001 kubelet[2201]: E1002 19:17:51.647963 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:52.621984 kubelet[2201]: E1002 
19:17:52.621935 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:53.623577 kubelet[2201]: E1002 19:17:53.623507 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:54.624234 kubelet[2201]: E1002 19:17:54.624151 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:55.624575 kubelet[2201]: E1002 19:17:55.624508 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:56.475341 kubelet[2201]: E1002 19:17:56.475275 2201 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:56.625370 kubelet[2201]: E1002 19:17:56.625308 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:56.649256 kubelet[2201]: E1002 19:17:56.649201 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:57.626400 kubelet[2201]: E1002 19:17:57.626353 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:58.627214 kubelet[2201]: E1002 19:17:58.627149 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:58.755785 kubelet[2201]: E1002 19:17:58.755509 2201 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-567xl_kube-system(151deab3-835b-44f8-842a-07e41c18fb22)\"" pod="kube-system/cilium-567xl" podUID="151deab3-835b-44f8-842a-07e41c18fb22" Oct 2 19:17:59.627631 kubelet[2201]: E1002 19:17:59.627566 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:00.628637 kubelet[2201]: E1002 19:18:00.628591 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:01.630159 kubelet[2201]: E1002 19:18:01.630033 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:01.651014 kubelet[2201]: E1002 19:18:01.650970 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:02.630852 kubelet[2201]: E1002 19:18:02.630806 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:03.632713 kubelet[2201]: E1002 19:18:03.632650 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:04.633593 kubelet[2201]: E1002 19:18:04.633529 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:05.634371 kubelet[2201]: E1002 19:18:05.634328 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:18:06.635174 kubelet[2201]: E1002 19:18:06.635071 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:06.651757 kubelet[2201]: E1002 19:18:06.651644 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:07.636246 kubelet[2201]: E1002 19:18:07.636180 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:08.637270 kubelet[2201]: E1002 19:18:08.637229 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:09.637990 kubelet[2201]: E1002 19:18:09.637948 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:10.639499 kubelet[2201]: E1002 19:18:10.639429 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:10.756966 kubelet[2201]: E1002 19:18:10.756871 2201 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-567xl_kube-system(151deab3-835b-44f8-842a-07e41c18fb22)\"" pod="kube-system/cilium-567xl" podUID="151deab3-835b-44f8-842a-07e41c18fb22" Oct 2 19:18:11.640439 kubelet[2201]: E1002 19:18:11.640393 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:11.652950 kubelet[2201]: E1002 19:18:11.652885 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:12.641492 kubelet[2201]: E1002 19:18:12.641447 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:13.642659 kubelet[2201]: E1002 19:18:13.642609 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:14.644245 kubelet[2201]: E1002 19:18:14.644199 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:15.646032 kubelet[2201]: E1002 19:18:15.645976 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:16.475538 kubelet[2201]: E1002 19:18:16.475472 2201 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:16.646323 kubelet[2201]: E1002 19:18:16.646264 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:16.653835 kubelet[2201]: E1002 19:18:16.653780 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:17.646840 kubelet[2201]: E1002 19:18:17.646774 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:18.647588 kubelet[2201]: E1002 19:18:18.647547 2201 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:19.648832 kubelet[2201]: E1002 19:18:19.648762 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:20.648967 kubelet[2201]: E1002 19:18:20.648899 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:21.649115 kubelet[2201]: E1002 19:18:21.649058 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:21.655180 kubelet[2201]: E1002 19:18:21.655141 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:22.649389 kubelet[2201]: E1002 19:18:22.649347 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:22.759106 env[1736]: time="2023-10-02T19:18:22.759004495Z" level=info msg="CreateContainer within sandbox \"1f4c4305d46b03db41fca1bf07f295066b212a4940b18f0b8f61f6916ed38269\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:5,}" Oct 2 19:18:22.780076 env[1736]: time="2023-10-02T19:18:22.780006159Z" level=info msg="CreateContainer within sandbox \"1f4c4305d46b03db41fca1bf07f295066b212a4940b18f0b8f61f6916ed38269\" for &ContainerMetadata{Name:mount-cgroup,Attempt:5,} returns container id \"715570aab5de0cb1379dbfa2fad4b69f4440f0e450cd2e9706c4940febca62ea\"" Oct 2 19:18:22.781223 env[1736]: time="2023-10-02T19:18:22.781173856Z" level=info msg="StartContainer for \"715570aab5de0cb1379dbfa2fad4b69f4440f0e450cd2e9706c4940febca62ea\"" Oct 2 19:18:22.837340 systemd[1]: run-containerd-runc-k8s.io-715570aab5de0cb1379dbfa2fad4b69f4440f0e450cd2e9706c4940febca62ea-runc.RW9Y4V.mount: Deactivated successfully. Oct 2 19:18:22.842683 systemd[1]: Started cri-containerd-715570aab5de0cb1379dbfa2fad4b69f4440f0e450cd2e9706c4940febca62ea.scope. Oct 2 19:18:22.873608 systemd[1]: cri-containerd-715570aab5de0cb1379dbfa2fad4b69f4440f0e450cd2e9706c4940febca62ea.scope: Deactivated successfully. Oct 2 19:18:22.881347 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-715570aab5de0cb1379dbfa2fad4b69f4440f0e450cd2e9706c4940febca62ea-rootfs.mount: Deactivated successfully. 
Oct 2 19:18:22.893018 env[1736]: time="2023-10-02T19:18:22.892951175Z" level=info msg="shim disconnected" id=715570aab5de0cb1379dbfa2fad4b69f4440f0e450cd2e9706c4940febca62ea Oct 2 19:18:22.893504 env[1736]: time="2023-10-02T19:18:22.893459783Z" level=warning msg="cleaning up after shim disconnected" id=715570aab5de0cb1379dbfa2fad4b69f4440f0e450cd2e9706c4940febca62ea namespace=k8s.io Oct 2 19:18:22.893645 env[1736]: time="2023-10-02T19:18:22.893617295Z" level=info msg="cleaning up dead shim" Oct 2 19:18:22.920227 env[1736]: time="2023-10-02T19:18:22.919336689Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:18:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2901 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:18:22Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/715570aab5de0cb1379dbfa2fad4b69f4440f0e450cd2e9706c4940febca62ea/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:18:22.920913 env[1736]: time="2023-10-02T19:18:22.920814273Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:18:22.921804 env[1736]: time="2023-10-02T19:18:22.921458817Z" level=error msg="Failed to pipe stderr of container \"715570aab5de0cb1379dbfa2fad4b69f4440f0e450cd2e9706c4940febca62ea\"" error="reading from a closed fifo" Oct 2 19:18:22.922040 env[1736]: time="2023-10-02T19:18:22.921465141Z" level=error msg="Failed to pipe stdout of container \"715570aab5de0cb1379dbfa2fad4b69f4440f0e450cd2e9706c4940febca62ea\"" error="reading from a closed fifo" Oct 2 19:18:22.923747 env[1736]: time="2023-10-02T19:18:22.923683234Z" level=error msg="StartContainer for \"715570aab5de0cb1379dbfa2fad4b69f4440f0e450cd2e9706c4940febca62ea\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:18:22.924217 kubelet[2201]: E1002 19:18:22.924163 2201 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="715570aab5de0cb1379dbfa2fad4b69f4440f0e450cd2e9706c4940febca62ea" Oct 2 19:18:22.924378 kubelet[2201]: E1002 19:18:22.924315 2201 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:18:22.924378 kubelet[2201]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:18:22.924378 kubelet[2201]: rm /hostbin/cilium-mount Oct 2 19:18:22.924378 kubelet[2201]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6d4sp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-567xl_kube-system(151deab3-835b-44f8-842a-07e41c18fb22): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:18:22.924666 kubelet[2201]: E1002 19:18:22.924381 2201 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-567xl" podUID="151deab3-835b-44f8-842a-07e41c18fb22" Oct 2 19:18:23.212751 kubelet[2201]: I1002 19:18:23.212051 2201 scope.go:117] "RemoveContainer" containerID="e70763f17859cd8ea76b854d2bc6f621475fa76aaf6ce7120d779c35480aee6a" Oct 2 19:18:23.213151 kubelet[2201]: I1002 19:18:23.213098 2201 scope.go:117] "RemoveContainer" containerID="e70763f17859cd8ea76b854d2bc6f621475fa76aaf6ce7120d779c35480aee6a" Oct 2 19:18:23.215716 env[1736]: time="2023-10-02T19:18:23.215649809Z" level=info msg="RemoveContainer for \"e70763f17859cd8ea76b854d2bc6f621475fa76aaf6ce7120d779c35480aee6a\"" Oct 2 19:18:23.217676 env[1736]: time="2023-10-02T19:18:23.217621026Z" level=info msg="RemoveContainer for \"e70763f17859cd8ea76b854d2bc6f621475fa76aaf6ce7120d779c35480aee6a\"" Oct 2 19:18:23.218020 env[1736]: time="2023-10-02T19:18:23.217969662Z" level=error msg="RemoveContainer for \"e70763f17859cd8ea76b854d2bc6f621475fa76aaf6ce7120d779c35480aee6a\" failed" error="failed to set removing state for container \"e70763f17859cd8ea76b854d2bc6f621475fa76aaf6ce7120d779c35480aee6a\": container is already in removing state" Oct 2 19:18:23.218746 kubelet[2201]: E1002 19:18:23.218716 2201 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"e70763f17859cd8ea76b854d2bc6f621475fa76aaf6ce7120d779c35480aee6a\": 
container is already in removing state" containerID="e70763f17859cd8ea76b854d2bc6f621475fa76aaf6ce7120d779c35480aee6a" Oct 2 19:18:23.219076 kubelet[2201]: E1002 19:18:23.219032 2201 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "e70763f17859cd8ea76b854d2bc6f621475fa76aaf6ce7120d779c35480aee6a": container is already in removing state; Skipping pod "cilium-567xl_kube-system(151deab3-835b-44f8-842a-07e41c18fb22)" Oct 2 19:18:23.219811 kubelet[2201]: E1002 19:18:23.219775 2201 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-567xl_kube-system(151deab3-835b-44f8-842a-07e41c18fb22)\"" pod="kube-system/cilium-567xl" podUID="151deab3-835b-44f8-842a-07e41c18fb22" Oct 2 19:18:23.221929 env[1736]: time="2023-10-02T19:18:23.221876335Z" level=info msg="RemoveContainer for \"e70763f17859cd8ea76b854d2bc6f621475fa76aaf6ce7120d779c35480aee6a\" returns successfully" Oct 2 19:18:23.650563 kubelet[2201]: E1002 19:18:23.650433 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:24.652101 kubelet[2201]: E1002 19:18:24.652035 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:25.652823 kubelet[2201]: E1002 19:18:25.652779 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:26.000078 kubelet[2201]: W1002 19:18:26.000032 2201 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod151deab3_835b_44f8_842a_07e41c18fb22.slice/cri-containerd-715570aab5de0cb1379dbfa2fad4b69f4440f0e450cd2e9706c4940febca62ea.scope WatchSource:0}: task 715570aab5de0cb1379dbfa2fad4b69f4440f0e450cd2e9706c4940febca62ea not found: not found Oct 2 19:18:26.654394 kubelet[2201]: E1002 19:18:26.654347 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:26.655847 kubelet[2201]: E1002 19:18:26.655794 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:27.655182 kubelet[2201]: E1002 19:18:27.655111 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:28.655504 kubelet[2201]: E1002 19:18:28.655435 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:29.655675 kubelet[2201]: E1002 19:18:29.655630 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:30.657243 kubelet[2201]: E1002 19:18:30.657169 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:31.657359 kubelet[2201]: E1002 19:18:31.657285 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:31.657359 kubelet[2201]: E1002 19:18:31.657329 2201 kubelet.go:2855] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:32.658089 kubelet[2201]: E1002 19:18:32.658025 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:33.658700 kubelet[2201]: E1002 19:18:33.658625 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:34.334439 env[1736]: time="2023-10-02T19:18:34.334363585Z" level=info msg="StopPodSandbox for \"1f4c4305d46b03db41fca1bf07f295066b212a4940b18f0b8f61f6916ed38269\"" Oct 2 19:18:34.335195 env[1736]: time="2023-10-02T19:18:34.335080381Z" level=info msg="Container to stop \"715570aab5de0cb1379dbfa2fad4b69f4440f0e450cd2e9706c4940febca62ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:18:34.337637 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f4c4305d46b03db41fca1bf07f295066b212a4940b18f0b8f61f6916ed38269-shm.mount: Deactivated successfully. Oct 2 19:18:34.356874 systemd[1]: cri-containerd-1f4c4305d46b03db41fca1bf07f295066b212a4940b18f0b8f61f6916ed38269.scope: Deactivated successfully. Oct 2 19:18:34.356000 audit: BPF prog-id=70 op=UNLOAD Oct 2 19:18:34.360498 kernel: kauditd_printk_skb: 190 callbacks suppressed Oct 2 19:18:34.360640 kernel: audit: type=1334 audit(1696274314.356:709): prog-id=70 op=UNLOAD Oct 2 19:18:34.362000 audit: BPF prog-id=76 op=UNLOAD Oct 2 19:18:34.368416 kernel: audit: type=1334 audit(1696274314.362:710): prog-id=76 op=UNLOAD Oct 2 19:18:34.407510 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f4c4305d46b03db41fca1bf07f295066b212a4940b18f0b8f61f6916ed38269-rootfs.mount: Deactivated successfully. 
Oct 2 19:18:34.422862 env[1736]: time="2023-10-02T19:18:34.422787128Z" level=info msg="shim disconnected" id=1f4c4305d46b03db41fca1bf07f295066b212a4940b18f0b8f61f6916ed38269 Oct 2 19:18:34.423189 env[1736]: time="2023-10-02T19:18:34.422861300Z" level=warning msg="cleaning up after shim disconnected" id=1f4c4305d46b03db41fca1bf07f295066b212a4940b18f0b8f61f6916ed38269 namespace=k8s.io Oct 2 19:18:34.423189 env[1736]: time="2023-10-02T19:18:34.422884160Z" level=info msg="cleaning up dead shim" Oct 2 19:18:34.449277 env[1736]: time="2023-10-02T19:18:34.449211826Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:18:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2931 runtime=io.containerd.runc.v2\n" Oct 2 19:18:34.449846 env[1736]: time="2023-10-02T19:18:34.449797006Z" level=info msg="TearDown network for sandbox \"1f4c4305d46b03db41fca1bf07f295066b212a4940b18f0b8f61f6916ed38269\" successfully" Oct 2 19:18:34.450071 env[1736]: time="2023-10-02T19:18:34.449845438Z" level=info msg="StopPodSandbox for \"1f4c4305d46b03db41fca1bf07f295066b212a4940b18f0b8f61f6916ed38269\" returns successfully" Oct 2 19:18:34.557682 kubelet[2201]: I1002 19:18:34.557641 2201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/151deab3-835b-44f8-842a-07e41c18fb22-cilium-config-path\") pod \"151deab3-835b-44f8-842a-07e41c18fb22\" (UID: \"151deab3-835b-44f8-842a-07e41c18fb22\") " Oct 2 19:18:34.557973 kubelet[2201]: I1002 19:18:34.557948 2201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-cilium-run\") pod \"151deab3-835b-44f8-842a-07e41c18fb22\" (UID: \"151deab3-835b-44f8-842a-07e41c18fb22\") " Oct 2 19:18:34.558149 kubelet[2201]: I1002 19:18:34.558106 2201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-lib-modules\") pod \"151deab3-835b-44f8-842a-07e41c18fb22\" (UID: \"151deab3-835b-44f8-842a-07e41c18fb22\") " Oct 2 19:18:34.558308 kubelet[2201]: I1002 19:18:34.558286 2201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-bpf-maps\") pod \"151deab3-835b-44f8-842a-07e41c18fb22\" (UID: \"151deab3-835b-44f8-842a-07e41c18fb22\") " Oct 2 19:18:34.558473 kubelet[2201]: I1002 19:18:34.558452 2201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-cilium-cgroup\") pod \"151deab3-835b-44f8-842a-07e41c18fb22\" (UID: \"151deab3-835b-44f8-842a-07e41c18fb22\") " Oct 2 19:18:34.558618 kubelet[2201]: I1002 19:18:34.558597 2201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-cni-path\") pod \"151deab3-835b-44f8-842a-07e41c18fb22\" (UID: \"151deab3-835b-44f8-842a-07e41c18fb22\") " Oct 2 19:18:34.558766 kubelet[2201]: I1002 19:18:34.558743 2201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6d4sp\" (UniqueName: \"kubernetes.io/projected/151deab3-835b-44f8-842a-07e41c18fb22-kube-api-access-6d4sp\") pod \"151deab3-835b-44f8-842a-07e41c18fb22\" (UID: 
\"151deab3-835b-44f8-842a-07e41c18fb22\") " Oct 2 19:18:34.559415 kubelet[2201]: I1002 19:18:34.559385 2201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/151deab3-835b-44f8-842a-07e41c18fb22-clustermesh-secrets\") pod \"151deab3-835b-44f8-842a-07e41c18fb22\" (UID: \"151deab3-835b-44f8-842a-07e41c18fb22\") " Oct 2 19:18:34.559601 kubelet[2201]: I1002 19:18:34.559580 2201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-hostproc\") pod \"151deab3-835b-44f8-842a-07e41c18fb22\" (UID: \"151deab3-835b-44f8-842a-07e41c18fb22\") " Oct 2 19:18:34.559760 kubelet[2201]: I1002 19:18:34.559739 2201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-xtables-lock\") pod \"151deab3-835b-44f8-842a-07e41c18fb22\" (UID: \"151deab3-835b-44f8-842a-07e41c18fb22\") " Oct 2 19:18:34.559902 kubelet[2201]: I1002 19:18:34.559882 2201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-host-proc-sys-net\") pod \"151deab3-835b-44f8-842a-07e41c18fb22\" (UID: \"151deab3-835b-44f8-842a-07e41c18fb22\") " Oct 2 19:18:34.560052 kubelet[2201]: I1002 19:18:34.560031 2201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/151deab3-835b-44f8-842a-07e41c18fb22-hubble-tls\") pod \"151deab3-835b-44f8-842a-07e41c18fb22\" (UID: \"151deab3-835b-44f8-842a-07e41c18fb22\") " Oct 2 19:18:34.564376 kubelet[2201]: I1002 19:18:34.564322 2201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-etc-cni-netd\") pod \"151deab3-835b-44f8-842a-07e41c18fb22\" (UID: \"151deab3-835b-44f8-842a-07e41c18fb22\") " Oct 2 19:18:34.564545 kubelet[2201]: I1002 19:18:34.564392 2201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-host-proc-sys-kernel\") pod \"151deab3-835b-44f8-842a-07e41c18fb22\" (UID: \"151deab3-835b-44f8-842a-07e41c18fb22\") " Oct 2 19:18:34.564545 kubelet[2201]: I1002 19:18:34.564457 2201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "151deab3-835b-44f8-842a-07e41c18fb22" (UID: "151deab3-835b-44f8-842a-07e41c18fb22"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:34.564545 kubelet[2201]: I1002 19:18:34.560567 2201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "151deab3-835b-44f8-842a-07e41c18fb22" (UID: "151deab3-835b-44f8-842a-07e41c18fb22"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:34.564545 kubelet[2201]: I1002 19:18:34.560600 2201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "151deab3-835b-44f8-842a-07e41c18fb22" (UID: "151deab3-835b-44f8-842a-07e41c18fb22"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:34.564545 kubelet[2201]: I1002 19:18:34.560627 2201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "151deab3-835b-44f8-842a-07e41c18fb22" (UID: "151deab3-835b-44f8-842a-07e41c18fb22"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:34.564545 kubelet[2201]: I1002 19:18:34.560655 2201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-cni-path" (OuterVolumeSpecName: "cni-path") pod "151deab3-835b-44f8-842a-07e41c18fb22" (UID: "151deab3-835b-44f8-842a-07e41c18fb22"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:34.564950 kubelet[2201]: I1002 19:18:34.559322 2201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "151deab3-835b-44f8-842a-07e41c18fb22" (UID: "151deab3-835b-44f8-842a-07e41c18fb22"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:34.564950 kubelet[2201]: I1002 19:18:34.564583 2201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "151deab3-835b-44f8-842a-07e41c18fb22" (UID: "151deab3-835b-44f8-842a-07e41c18fb22"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:34.564950 kubelet[2201]: I1002 19:18:34.564636 2201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "151deab3-835b-44f8-842a-07e41c18fb22" (UID: "151deab3-835b-44f8-842a-07e41c18fb22"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:34.564950 kubelet[2201]: I1002 19:18:34.564675 2201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-hostproc" (OuterVolumeSpecName: "hostproc") pod "151deab3-835b-44f8-842a-07e41c18fb22" (UID: "151deab3-835b-44f8-842a-07e41c18fb22"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:34.564950 kubelet[2201]: I1002 19:18:34.564720 2201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "151deab3-835b-44f8-842a-07e41c18fb22" (UID: "151deab3-835b-44f8-842a-07e41c18fb22"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:34.572588 kubelet[2201]: I1002 19:18:34.572528 2201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/151deab3-835b-44f8-842a-07e41c18fb22-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "151deab3-835b-44f8-842a-07e41c18fb22" (UID: "151deab3-835b-44f8-842a-07e41c18fb22"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:18:34.577405 systemd[1]: var-lib-kubelet-pods-151deab3\x2d835b\x2d44f8\x2d842a\x2d07e41c18fb22-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6d4sp.mount: Deactivated successfully. Oct 2 19:18:34.581199 systemd[1]: var-lib-kubelet-pods-151deab3\x2d835b\x2d44f8\x2d842a\x2d07e41c18fb22-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:18:34.582733 kubelet[2201]: I1002 19:18:34.582233 2201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/151deab3-835b-44f8-842a-07e41c18fb22-kube-api-access-6d4sp" (OuterVolumeSpecName: "kube-api-access-6d4sp") pod "151deab3-835b-44f8-842a-07e41c18fb22" (UID: "151deab3-835b-44f8-842a-07e41c18fb22"). InnerVolumeSpecName "kube-api-access-6d4sp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:18:34.588428 kubelet[2201]: I1002 19:18:34.584897 2201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/151deab3-835b-44f8-842a-07e41c18fb22-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "151deab3-835b-44f8-842a-07e41c18fb22" (UID: "151deab3-835b-44f8-842a-07e41c18fb22"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:18:34.587391 systemd[1]: var-lib-kubelet-pods-151deab3\x2d835b\x2d44f8\x2d842a\x2d07e41c18fb22-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:18:34.589888 kubelet[2201]: I1002 19:18:34.589810 2201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/151deab3-835b-44f8-842a-07e41c18fb22-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "151deab3-835b-44f8-842a-07e41c18fb22" (UID: "151deab3-835b-44f8-842a-07e41c18fb22"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:18:34.659067 kubelet[2201]: E1002 19:18:34.658997 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:34.665360 kubelet[2201]: I1002 19:18:34.665316 2201 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/151deab3-835b-44f8-842a-07e41c18fb22-clustermesh-secrets\") on node \"172.31.24.89\" DevicePath \"\"" Oct 2 19:18:34.665457 kubelet[2201]: I1002 19:18:34.665363 2201 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-hostproc\") on node \"172.31.24.89\" DevicePath \"\"" Oct 2 19:18:34.665457 kubelet[2201]: I1002 19:18:34.665390 2201 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-cni-path\") on node \"172.31.24.89\" DevicePath \"\"" Oct 2 19:18:34.665457 kubelet[2201]: I1002 19:18:34.665415 2201 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6d4sp\" (UniqueName: \"kubernetes.io/projected/151deab3-835b-44f8-842a-07e41c18fb22-kube-api-access-6d4sp\") on node \"172.31.24.89\" DevicePath \"\"" Oct 2 19:18:34.665457 kubelet[2201]: I1002 19:18:34.665437 2201 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-etc-cni-netd\") on node \"172.31.24.89\" DevicePath \"\"" Oct 2 19:18:34.665696 kubelet[2201]: I1002 19:18:34.665464 2201 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-host-proc-sys-kernel\") on node \"172.31.24.89\" DevicePath \"\"" Oct 2 19:18:34.665696 kubelet[2201]: I1002 19:18:34.665518 2201 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-xtables-lock\") on node \"172.31.24.89\" DevicePath \"\"" Oct 2 19:18:34.665696 kubelet[2201]: I1002 19:18:34.665544 2201 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-host-proc-sys-net\") on node \"172.31.24.89\" DevicePath \"\"" Oct 2 19:18:34.665696 kubelet[2201]: I1002 19:18:34.665568 2201 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/151deab3-835b-44f8-842a-07e41c18fb22-hubble-tls\") on node \"172.31.24.89\" DevicePath \"\"" Oct 2 19:18:34.665696 kubelet[2201]: I1002 19:18:34.665590 2201 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/151deab3-835b-44f8-842a-07e41c18fb22-cilium-config-path\") on node \"172.31.24.89\" DevicePath \"\"" Oct 2 19:18:34.665696 kubelet[2201]: I1002 19:18:34.665613 2201 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-cilium-run\") on node \"172.31.24.89\" DevicePath \"\"" Oct 2 19:18:34.665696 kubelet[2201]: I1002 19:18:34.665637 2201 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-lib-modules\") on node \"172.31.24.89\" DevicePath \"\"" Oct 2 19:18:34.665696 kubelet[2201]: I1002 
19:18:34.665660 2201 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-bpf-maps\") on node \"172.31.24.89\" DevicePath \"\"" Oct 2 19:18:34.665696 kubelet[2201]: I1002 19:18:34.665683 2201 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/151deab3-835b-44f8-842a-07e41c18fb22-cilium-cgroup\") on node \"172.31.24.89\" DevicePath \"\"" Oct 2 19:18:34.766214 systemd[1]: Removed slice kubepods-burstable-pod151deab3_835b_44f8_842a_07e41c18fb22.slice. Oct 2 19:18:35.246079 kubelet[2201]: I1002 19:18:35.244856 2201 scope.go:117] "RemoveContainer" containerID="715570aab5de0cb1379dbfa2fad4b69f4440f0e450cd2e9706c4940febca62ea" Oct 2 19:18:35.249000 env[1736]: time="2023-10-02T19:18:35.248490675Z" level=info msg="RemoveContainer for \"715570aab5de0cb1379dbfa2fad4b69f4440f0e450cd2e9706c4940febca62ea\"" Oct 2 19:18:35.252495 env[1736]: time="2023-10-02T19:18:35.252441459Z" level=info msg="RemoveContainer for \"715570aab5de0cb1379dbfa2fad4b69f4440f0e450cd2e9706c4940febca62ea\" returns successfully" Oct 2 19:18:35.660108 kubelet[2201]: E1002 19:18:35.659949 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:36.475090 kubelet[2201]: E1002 19:18:36.475026 2201 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:36.658493 kubelet[2201]: E1002 19:18:36.658438 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:36.661072 kubelet[2201]: E1002 19:18:36.661029 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:36.762338 kubelet[2201]: I1002 19:18:36.762217 2201 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="151deab3-835b-44f8-842a-07e41c18fb22" path="/var/lib/kubelet/pods/151deab3-835b-44f8-842a-07e41c18fb22/volumes" Oct 2 19:18:37.661995 kubelet[2201]: E1002 19:18:37.661923 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:38.275493 kubelet[2201]: I1002 19:18:38.275435 2201 topology_manager.go:215] "Topology Admit Handler" podUID="a4565b06-d10d-4d8e-a28e-b41e49f8343a" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-ckw7d" Oct 2 19:18:38.275657 kubelet[2201]: E1002 19:18:38.275527 2201 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="151deab3-835b-44f8-842a-07e41c18fb22" containerName="mount-cgroup" Oct 2 19:18:38.275657 kubelet[2201]: E1002 19:18:38.275552 2201 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="151deab3-835b-44f8-842a-07e41c18fb22" containerName="mount-cgroup" Oct 2 19:18:38.275657 kubelet[2201]: E1002 19:18:38.275596 2201 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="151deab3-835b-44f8-842a-07e41c18fb22" containerName="mount-cgroup" Oct 2 19:18:38.275657 kubelet[2201]: E1002 19:18:38.275616 2201 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="151deab3-835b-44f8-842a-07e41c18fb22" containerName="mount-cgroup" Oct 2 19:18:38.275941 kubelet[2201]: I1002 19:18:38.275669 2201 memory_manager.go:346] "RemoveStaleState removing state" podUID="151deab3-835b-44f8-842a-07e41c18fb22" 
containerName="mount-cgroup" Oct 2 19:18:38.275941 kubelet[2201]: I1002 19:18:38.275690 2201 memory_manager.go:346] "RemoveStaleState removing state" podUID="151deab3-835b-44f8-842a-07e41c18fb22" containerName="mount-cgroup" Oct 2 19:18:38.275941 kubelet[2201]: I1002 19:18:38.275706 2201 memory_manager.go:346] "RemoveStaleState removing state" podUID="151deab3-835b-44f8-842a-07e41c18fb22" containerName="mount-cgroup" Oct 2 19:18:38.275941 kubelet[2201]: I1002 19:18:38.275723 2201 memory_manager.go:346] "RemoveStaleState removing state" podUID="151deab3-835b-44f8-842a-07e41c18fb22" containerName="mount-cgroup" Oct 2 19:18:38.275941 kubelet[2201]: I1002 19:18:38.275760 2201 memory_manager.go:346] "RemoveStaleState removing state" podUID="151deab3-835b-44f8-842a-07e41c18fb22" containerName="mount-cgroup" Oct 2 19:18:38.284960 systemd[1]: Created slice kubepods-besteffort-poda4565b06_d10d_4d8e_a28e_b41e49f8343a.slice. Oct 2 19:18:38.286759 kubelet[2201]: I1002 19:18:38.286614 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a4565b06-d10d-4d8e-a28e-b41e49f8343a-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-ckw7d\" (UID: \"a4565b06-d10d-4d8e-a28e-b41e49f8343a\") " pod="kube-system/cilium-operator-6bc8ccdb58-ckw7d" Oct 2 19:18:38.288074 kubelet[2201]: I1002 19:18:38.288045 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qr65\" (UniqueName: \"kubernetes.io/projected/a4565b06-d10d-4d8e-a28e-b41e49f8343a-kube-api-access-5qr65\") pod \"cilium-operator-6bc8ccdb58-ckw7d\" (UID: \"a4565b06-d10d-4d8e-a28e-b41e49f8343a\") " pod="kube-system/cilium-operator-6bc8ccdb58-ckw7d" Oct 2 19:18:38.482725 kubelet[2201]: I1002 19:18:38.482675 2201 topology_manager.go:215] "Topology Admit Handler" podUID="a054447d-2a60-44bf-9c27-9351e1dac17f" podNamespace="kube-system" podName="cilium-g9zw8" Oct 2 19:18:38.482899 kubelet[2201]: E1002 19:18:38.482751 2201 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="151deab3-835b-44f8-842a-07e41c18fb22" containerName="mount-cgroup" Oct 2 19:18:38.482899 kubelet[2201]: E1002 19:18:38.482775 2201 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="151deab3-835b-44f8-842a-07e41c18fb22" containerName="mount-cgroup" Oct 2 19:18:38.482899 kubelet[2201]: I1002 19:18:38.482813 2201 memory_manager.go:346] "RemoveStaleState removing state" podUID="151deab3-835b-44f8-842a-07e41c18fb22" containerName="mount-cgroup" Oct 2 19:18:38.492540 systemd[1]: Created slice kubepods-burstable-poda054447d_2a60_44bf_9c27_9351e1dac17f.slice. 
Oct 2 19:18:38.590251 kubelet[2201]: I1002 19:18:38.590105 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-cilium-run\") pod \"cilium-g9zw8\" (UID: \"a054447d-2a60-44bf-9c27-9351e1dac17f\") " pod="kube-system/cilium-g9zw8" Oct 2 19:18:38.590483 kubelet[2201]: I1002 19:18:38.590457 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-hostproc\") pod \"cilium-g9zw8\" (UID: \"a054447d-2a60-44bf-9c27-9351e1dac17f\") " pod="kube-system/cilium-g9zw8" Oct 2 19:18:38.590715 kubelet[2201]: I1002 19:18:38.590692 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-lib-modules\") pod \"cilium-g9zw8\" (UID: \"a054447d-2a60-44bf-9c27-9351e1dac17f\") " pod="kube-system/cilium-g9zw8" Oct 2 19:18:38.590924 kubelet[2201]: I1002 19:18:38.590900 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a054447d-2a60-44bf-9c27-9351e1dac17f-cilium-ipsec-secrets\") pod \"cilium-g9zw8\" (UID: \"a054447d-2a60-44bf-9c27-9351e1dac17f\") " pod="kube-system/cilium-g9zw8" Oct 2 19:18:38.591136 kubelet[2201]: I1002 19:18:38.591098 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-host-proc-sys-kernel\") pod \"cilium-g9zw8\" (UID: \"a054447d-2a60-44bf-9c27-9351e1dac17f\") " pod="kube-system/cilium-g9zw8" Oct 2 19:18:38.591304 kubelet[2201]: I1002 19:18:38.591282 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-cni-path\") pod \"cilium-g9zw8\" (UID: \"a054447d-2a60-44bf-9c27-9351e1dac17f\") " pod="kube-system/cilium-g9zw8" Oct 2 19:18:38.591451 kubelet[2201]: I1002 19:18:38.591430 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-etc-cni-netd\") pod \"cilium-g9zw8\" (UID: \"a054447d-2a60-44bf-9c27-9351e1dac17f\") " pod="kube-system/cilium-g9zw8" Oct 2 19:18:38.591620 kubelet[2201]: I1002 19:18:38.591599 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-xtables-lock\") pod \"cilium-g9zw8\" (UID: \"a054447d-2a60-44bf-9c27-9351e1dac17f\") " pod="kube-system/cilium-g9zw8" Oct 2 19:18:38.591769 kubelet[2201]: I1002 19:18:38.591749 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jprz\" (UniqueName: \"kubernetes.io/projected/a054447d-2a60-44bf-9c27-9351e1dac17f-kube-api-access-4jprz\") pod \"cilium-g9zw8\" (UID: \"a054447d-2a60-44bf-9c27-9351e1dac17f\") " pod="kube-system/cilium-g9zw8" Oct 2 19:18:38.591917 kubelet[2201]: I1002 19:18:38.591896 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-bpf-maps\") pod \"cilium-g9zw8\" (UID: \"a054447d-2a60-44bf-9c27-9351e1dac17f\") " pod="kube-system/cilium-g9zw8" Oct 2 19:18:38.592062 kubelet[2201]: I1002 19:18:38.592041 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-cilium-cgroup\") pod \"cilium-g9zw8\" (UID: \"a054447d-2a60-44bf-9c27-9351e1dac17f\") " pod="kube-system/cilium-g9zw8" Oct 2 19:18:38.592237 kubelet[2201]: I1002 19:18:38.592214 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a054447d-2a60-44bf-9c27-9351e1dac17f-clustermesh-secrets\") pod \"cilium-g9zw8\" (UID: \"a054447d-2a60-44bf-9c27-9351e1dac17f\") " pod="kube-system/cilium-g9zw8" Oct 2 19:18:38.592335 env[1736]: time="2023-10-02T19:18:38.592243456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-ckw7d,Uid:a4565b06-d10d-4d8e-a28e-b41e49f8343a,Namespace:kube-system,Attempt:0,}" Oct 2 19:18:38.593060 kubelet[2201]: I1002 19:18:38.593025 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-host-proc-sys-net\") pod \"cilium-g9zw8\" (UID: \"a054447d-2a60-44bf-9c27-9351e1dac17f\") " pod="kube-system/cilium-g9zw8" Oct 2 19:18:38.593400 kubelet[2201]: I1002 19:18:38.593316 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a054447d-2a60-44bf-9c27-9351e1dac17f-hubble-tls\") pod \"cilium-g9zw8\" (UID: \"a054447d-2a60-44bf-9c27-9351e1dac17f\") " pod="kube-system/cilium-g9zw8" Oct 2 19:18:38.593765 kubelet[2201]: I1002 19:18:38.593738 2201 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a054447d-2a60-44bf-9c27-9351e1dac17f-cilium-config-path\") pod \"cilium-g9zw8\" (UID: \"a054447d-2a60-44bf-9c27-9351e1dac17f\") " pod="kube-system/cilium-g9zw8" Oct 2 19:18:38.631061 env[1736]: time="2023-10-02T19:18:38.630932331Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:18:38.631061 env[1736]: time="2023-10-02T19:18:38.631013883Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:18:38.631447 env[1736]: time="2023-10-02T19:18:38.631370523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:18:38.632001 env[1736]: time="2023-10-02T19:18:38.631913055Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a277bf5fd49ba93dc901aa84fbbfabdc0c23b62cbfee857d83e0e9027a4e44d2 pid=2957 runtime=io.containerd.runc.v2 Oct 2 19:18:38.662632 kubelet[2201]: E1002 19:18:38.662529 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:38.667606 systemd[1]: Started cri-containerd-a277bf5fd49ba93dc901aa84fbbfabdc0c23b62cbfee857d83e0e9027a4e44d2.scope. 
Oct 2 19:18:38.680460 systemd[1]: run-containerd-runc-k8s.io-a277bf5fd49ba93dc901aa84fbbfabdc0c23b62cbfee857d83e0e9027a4e44d2-runc.vXp8Yy.mount: Deactivated successfully. Oct 2 19:18:38.718000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.736979 kernel: audit: type=1400 audit(1696274318.718:711): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.737212 kernel: audit: type=1400 audit(1696274318.718:712): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.718000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.748166 kernel: audit: type=1400 audit(1696274318.718:713): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.718000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.761094 kernel: audit: type=1400 audit(1696274318.718:714): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.718000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.774640 kernel: audit: type=1400 audit(1696274318.718:715): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.718000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.788523 kernel: audit: type=1400 audit(1696274318.718:716): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.718000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.801062 kernel: audit: type=1400 audit(1696274318.718:717): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.718000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.809537 kernel: audit: type=1400 audit(1696274318.718:718): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:18:38.718000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.718000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.726000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.811577 env[1736]: time="2023-10-02T19:18:38.811368531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-ckw7d,Uid:a4565b06-d10d-4d8e-a28e-b41e49f8343a,Namespace:kube-system,Attempt:0,} returns sandbox id \"a277bf5fd49ba93dc901aa84fbbfabdc0c23b62cbfee857d83e0e9027a4e44d2\"" Oct 2 19:18:38.726000 audit: BPF prog-id=81 op=LOAD Oct 2 19:18:38.727000 audit[2966]: AVC avc: denied { bpf } for pid=2966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.727000 audit[2966]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000195b38 a2=10 a3=0 items=0 ppid=2957 pid=2966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:38.727000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132373762663566643439626139336463393031616138346662626661 Oct 2 19:18:38.727000 audit[2966]: AVC avc: denied { perfmon } for pid=2966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.727000 audit[2966]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=2957 pid=2966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:38.727000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132373762663566643439626139336463393031616138346662626661 Oct 2 19:18:38.727000 audit[2966]: AVC avc: denied { bpf } for pid=2966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.727000 audit[2966]: AVC avc: denied { bpf } for pid=2966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.727000 audit[2966]: AVC avc: denied { bpf } for pid=2966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.727000 audit[2966]: AVC avc: denied { perfmon } for pid=2966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:18:38.727000 audit[2966]: AVC avc: denied { perfmon } for pid=2966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.727000 audit[2966]: AVC avc: denied { perfmon } for pid=2966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.727000 audit[2966]: AVC avc: denied { perfmon } for pid=2966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.727000 audit[2966]: AVC avc: denied { perfmon } for pid=2966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.727000 audit[2966]: AVC avc: denied { bpf } for pid=2966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.727000 audit[2966]: AVC avc: denied { bpf } for pid=2966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.727000 audit: BPF prog-id=82 op=LOAD Oct 2 19:18:38.727000 audit[2966]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=2957 pid=2966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:38.727000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132373762663566643439626139336463393031616138346662626661 Oct 2 19:18:38.727000 audit[2966]: AVC avc: denied { bpf } for pid=2966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.727000 audit[2966]: AVC avc: denied { bpf } for pid=2966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.727000 audit[2966]: AVC avc: denied { perfmon } for pid=2966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.727000 audit[2966]: AVC avc: denied { perfmon } for pid=2966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.727000 audit[2966]: AVC avc: denied { perfmon } for pid=2966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.727000 audit[2966]: AVC avc: denied { perfmon } for pid=2966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.727000 audit[2966]: AVC avc: denied { perfmon } for pid=2966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.727000 audit[2966]: AVC avc: denied { bpf } for pid=2966 
comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.727000 audit[2966]: AVC avc: denied { bpf } for pid=2966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.727000 audit: BPF prog-id=83 op=LOAD Oct 2 19:18:38.727000 audit[2966]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=2957 pid=2966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:38.727000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132373762663566643439626139336463393031616138346662626661 Oct 2 19:18:38.727000 audit: BPF prog-id=83 op=UNLOAD Oct 2 19:18:38.727000 audit: BPF prog-id=82 op=UNLOAD Oct 2 19:18:38.727000 audit[2966]: AVC avc: denied { bpf } for pid=2966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.727000 audit[2966]: AVC avc: denied { bpf } for pid=2966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.727000 audit[2966]: AVC avc: denied { bpf } for pid=2966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.727000 audit[2966]: AVC avc: denied { perfmon } for pid=2966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.727000 audit[2966]: AVC avc: denied { perfmon } for pid=2966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.727000 audit[2966]: AVC avc: denied { perfmon } for pid=2966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.727000 audit[2966]: AVC avc: denied { perfmon } for pid=2966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.727000 audit[2966]: AVC avc: denied { perfmon } for pid=2966 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.727000 audit[2966]: AVC avc: denied { bpf } for pid=2966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.727000 audit[2966]: AVC avc: denied { bpf } for pid=2966 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:38.727000 audit: BPF prog-id=84 op=LOAD Oct 2 19:18:38.727000 audit[2966]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=2957 pid=2966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:38.727000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132373762663566643439626139336463393031616138346662626661 Oct 2 19:18:38.816170 env[1736]: time="2023-10-02T19:18:38.814517739Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 2 19:18:39.102734 env[1736]: time="2023-10-02T19:18:39.102658245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g9zw8,Uid:a054447d-2a60-44bf-9c27-9351e1dac17f,Namespace:kube-system,Attempt:0,}" Oct 2 19:18:39.135023 env[1736]: time="2023-10-02T19:18:39.134889320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:18:39.135023 env[1736]: time="2023-10-02T19:18:39.134973140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:18:39.135364 env[1736]: time="2023-10-02T19:18:39.135294356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:18:39.135880 env[1736]: time="2023-10-02T19:18:39.135751316Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4feadb73dba0f84388fad9fb95e4bae196010a62fd0bd63b9cd24cd672e26ff6 pid=3001 runtime=io.containerd.runc.v2 Oct 2 19:18:39.165817 systemd[1]: Started cri-containerd-4feadb73dba0f84388fad9fb95e4bae196010a62fd0bd63b9cd24cd672e26ff6.scope. 
Oct 2 19:18:39.203000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.203000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.203000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.203000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.203000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.203000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.203000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.203000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.203000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.203000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.203000 audit: BPF prog-id=85 op=LOAD Oct 2 19:18:39.204000 audit[3011]: AVC avc: denied { bpf } for pid=3011 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.204000 audit[3011]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=400014db38 a2=10 a3=0 items=0 ppid=3001 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:39.204000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466656164623733646261306638343338386661643966623935653462 Oct 2 19:18:39.204000 audit[3011]: AVC avc: denied { perfmon } for pid=3011 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.204000 audit[3011]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=400014d5a0 a2=3c a3=0 items=0 ppid=3001 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:39.204000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466656164623733646261306638343338386661643966623935653462 Oct 2 19:18:39.204000 audit[3011]: AVC avc: denied { bpf } for pid=3011 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.204000 audit[3011]: AVC avc: denied { bpf } for pid=3011 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.204000 audit[3011]: AVC avc: denied { bpf } for pid=3011 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.204000 audit[3011]: AVC avc: denied { perfmon } for pid=3011 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.204000 audit[3011]: AVC avc: denied { perfmon } for pid=3011 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.204000 audit[3011]: AVC avc: denied { perfmon } for pid=3011 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.204000 audit[3011]: AVC avc: denied { perfmon } for pid=3011 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.204000 audit[3011]: AVC avc: denied { perfmon } for pid=3011 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.204000 audit[3011]: AVC avc: denied { bpf } for pid=3011 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.204000 audit[3011]: AVC avc: denied { bpf } for pid=3011 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.204000 audit: BPF prog-id=86 op=LOAD Oct 2 19:18:39.204000 audit[3011]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400014d8e0 a2=78 a3=0 items=0 ppid=3001 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:39.204000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466656164623733646261306638343338386661643966623935653462 Oct 2 19:18:39.204000 audit[3011]: AVC avc: denied { bpf } for pid=3011 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.204000 audit[3011]: AVC avc: denied { bpf } for pid=3011 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.204000 audit[3011]: AVC avc: denied { 
perfmon } for pid=3011 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.204000 audit[3011]: AVC avc: denied { perfmon } for pid=3011 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.204000 audit[3011]: AVC avc: denied { perfmon } for pid=3011 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.204000 audit[3011]: AVC avc: denied { perfmon } for pid=3011 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.204000 audit[3011]: AVC avc: denied { perfmon } for pid=3011 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.204000 audit[3011]: AVC avc: denied { bpf } for pid=3011 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.204000 audit[3011]: AVC avc: denied { bpf } for pid=3011 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.204000 audit: BPF prog-id=87 op=LOAD Oct 2 19:18:39.204000 audit[3011]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400014d670 a2=78 a3=0 items=0 ppid=3001 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:39.204000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466656164623733646261306638343338386661643966623935653462 Oct 2 19:18:39.204000 audit: BPF prog-id=87 op=UNLOAD Oct 2 19:18:39.204000 audit: BPF prog-id=86 op=UNLOAD Oct 2 19:18:39.205000 audit[3011]: AVC avc: denied { bpf } for pid=3011 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.205000 audit[3011]: AVC avc: denied { bpf } for pid=3011 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.205000 audit[3011]: AVC avc: denied { bpf } for pid=3011 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.205000 audit[3011]: AVC avc: denied { perfmon } for pid=3011 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.205000 audit[3011]: AVC avc: denied { perfmon } for pid=3011 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.205000 audit[3011]: AVC avc: denied { perfmon } for pid=3011 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.205000 audit[3011]: AVC avc: denied { perfmon } 
for pid=3011 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.205000 audit[3011]: AVC avc: denied { perfmon } for pid=3011 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.205000 audit[3011]: AVC avc: denied { bpf } for pid=3011 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.205000 audit[3011]: AVC avc: denied { bpf } for pid=3011 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:39.205000 audit: BPF prog-id=88 op=LOAD Oct 2 19:18:39.205000 audit[3011]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400014db40 a2=78 a3=0 items=0 ppid=3001 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:39.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466656164623733646261306638343338386661643966623935653462 Oct 2 19:18:39.233050 env[1736]: time="2023-10-02T19:18:39.232979178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g9zw8,Uid:a054447d-2a60-44bf-9c27-9351e1dac17f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4feadb73dba0f84388fad9fb95e4bae196010a62fd0bd63b9cd24cd672e26ff6\"" Oct 2 19:18:39.238626 env[1736]: time="2023-10-02T19:18:39.238572978Z" level=info msg="CreateContainer within sandbox \"4feadb73dba0f84388fad9fb95e4bae196010a62fd0bd63b9cd24cd672e26ff6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:18:39.270953 env[1736]: time="2023-10-02T19:18:39.270869177Z" level=info msg="CreateContainer within sandbox \"4feadb73dba0f84388fad9fb95e4bae196010a62fd0bd63b9cd24cd672e26ff6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1db43bc51e80a2a26fd0f2876081b6c1bac624f0191818b2c6602d06257b672e\"" Oct 2 19:18:39.271906 env[1736]: time="2023-10-02T19:18:39.271856669Z" level=info msg="StartContainer for \"1db43bc51e80a2a26fd0f2876081b6c1bac624f0191818b2c6602d06257b672e\"" Oct 2 19:18:39.315186 systemd[1]: Started cri-containerd-1db43bc51e80a2a26fd0f2876081b6c1bac624f0191818b2c6602d06257b672e.scope. Oct 2 19:18:39.352686 systemd[1]: cri-containerd-1db43bc51e80a2a26fd0f2876081b6c1bac624f0191818b2c6602d06257b672e.scope: Deactivated successfully. 
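The PROCTITLE records above are not garbage: the `proctitle=` field is the audited process's argv, hex-encoded with NUL bytes between arguments (and length-limited, which is why it cuts off inside the container task path). Decoding just the prefix quoted verbatim in these records shows it is the plain runc invocation for the sandbox; an illustrative decoder:

```python
def decode_proctitle(hexstr: str):
    # argv comes hex-encoded, with NUL bytes separating the arguments
    return [arg.decode() for arg in bytes.fromhex(hexstr).split(b"\x00") if arg]

# prefix of the proctitle= value from the audit records above
sample = ("72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F"
          "6B38732E696F002D2D6C6F67")
print(decode_proctitle(sample))
# ['runc', '--root', '/run/containerd/runc/k8s.io', '--log']
```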
Oct 2 19:18:39.388400 env[1736]: time="2023-10-02T19:18:39.388310007Z" level=info msg="shim disconnected" id=1db43bc51e80a2a26fd0f2876081b6c1bac624f0191818b2c6602d06257b672e Oct 2 19:18:39.388400 env[1736]: time="2023-10-02T19:18:39.388389039Z" level=warning msg="cleaning up after shim disconnected" id=1db43bc51e80a2a26fd0f2876081b6c1bac624f0191818b2c6602d06257b672e namespace=k8s.io Oct 2 19:18:39.388750 env[1736]: time="2023-10-02T19:18:39.388412043Z" level=info msg="cleaning up dead shim" Oct 2 19:18:39.416878 env[1736]: time="2023-10-02T19:18:39.416780103Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:18:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3059 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:18:39Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1db43bc51e80a2a26fd0f2876081b6c1bac624f0191818b2c6602d06257b672e/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:18:39.417381 env[1736]: time="2023-10-02T19:18:39.417295491Z" level=error msg="copy shim log" error="read /proc/self/fd/40: file already closed" Oct 2 19:18:39.417811 env[1736]: time="2023-10-02T19:18:39.417755199Z" level=error msg="Failed to pipe stdout of container \"1db43bc51e80a2a26fd0f2876081b6c1bac624f0191818b2c6602d06257b672e\"" error="reading from a closed fifo" Oct 2 19:18:39.417942 env[1736]: time="2023-10-02T19:18:39.417893139Z" level=error msg="Failed to pipe stderr of container \"1db43bc51e80a2a26fd0f2876081b6c1bac624f0191818b2c6602d06257b672e\"" error="reading from a closed fifo" Oct 2 19:18:39.420142 env[1736]: time="2023-10-02T19:18:39.420052610Z" level=error msg="StartContainer for \"1db43bc51e80a2a26fd0f2876081b6c1bac624f0191818b2c6602d06257b672e\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:18:39.420519 kubelet[2201]: E1002 19:18:39.420471 2201 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="1db43bc51e80a2a26fd0f2876081b6c1bac624f0191818b2c6602d06257b672e" Oct 2 19:18:39.420695 kubelet[2201]: E1002 19:18:39.420627 2201 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:18:39.420695 kubelet[2201]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:18:39.420695 kubelet[2201]: rm /hostbin/cilium-mount Oct 2 19:18:39.420695 kubelet[2201]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4jprz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-g9zw8_kube-system(a054447d-2a60-44bf-9c27-9351e1dac17f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:18:39.420695 kubelet[2201]: E1002 19:18:39.420692 2201 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-g9zw8" podUID="a054447d-2a60-44bf-9c27-9351e1dac17f" Oct 2 19:18:39.663070 kubelet[2201]: E1002 19:18:39.662908 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:40.183635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3930518738.mount: Deactivated successfully. Oct 2 19:18:40.272962 env[1736]: time="2023-10-02T19:18:40.272859217Z" level=info msg="CreateContainer within sandbox \"4feadb73dba0f84388fad9fb95e4bae196010a62fd0bd63b9cd24cd672e26ff6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:18:40.316901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3157352298.mount: Deactivated successfully. Oct 2 19:18:40.340210 env[1736]: time="2023-10-02T19:18:40.340143982Z" level=info msg="CreateContainer within sandbox \"4feadb73dba0f84388fad9fb95e4bae196010a62fd0bd63b9cd24cd672e26ff6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"789ff5f7e2da40fa552c902fc4d112d956ae48f91ab060d5f331f72b7e514de5\"" Oct 2 19:18:40.341861 env[1736]: time="2023-10-02T19:18:40.341757502Z" level=info msg="StartContainer for \"789ff5f7e2da40fa552c902fc4d112d956ae48f91ab060d5f331f72b7e514de5\"" Oct 2 19:18:40.390564 systemd[1]: Started cri-containerd-789ff5f7e2da40fa552c902fc4d112d956ae48f91ab060d5f331f72b7e514de5.scope. 
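The first attempt at the `mount-cgroup` init container above dies inside runc with `write /proc/self/attr/keycreate: invalid argument`, and the retry that has just been started fails the same way immediately below. The container spec the kubelet dumps alongside the error sets `SELinuxOptions{Type:spc_t,Level:s0}`, so runc writes that label into the process's SELinux key-creation attribute before exec'ing and the kernel refuses it; a plausible reading (the log alone does not prove it) is that the policy loaded on this node rejects `spc_t` for key creation, which is why the pod ends up in the CrashLoopBackOff reported further down. For poking at the same attribute files on a node, a read-only, purely illustrative sketch (nothing in it is taken from the log):

```python
# Read-only peek at the procfs SELinux attribute files that runc writes when it
# applies a container's SELinuxOptions; useful for seeing what labels the
# current shell would hand to new processes and key creation.
from pathlib import Path

def selinux_attr(name: str) -> str:
    path = Path("/proc/self/attr") / name
    try:
        return path.read_text().rstrip("\x00\n") or "(empty)"
    except OSError as exc:
        return f"unreadable: {exc}"

for attr in ("current", "exec", "keycreate"):
    print(f"{attr:9s} -> {selinux_attr(attr)}")
```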
Oct 2 19:18:40.428541 systemd[1]: cri-containerd-789ff5f7e2da40fa552c902fc4d112d956ae48f91ab060d5f331f72b7e514de5.scope: Deactivated successfully. Oct 2 19:18:40.467951 env[1736]: time="2023-10-02T19:18:40.467785829Z" level=info msg="shim disconnected" id=789ff5f7e2da40fa552c902fc4d112d956ae48f91ab060d5f331f72b7e514de5 Oct 2 19:18:40.467951 env[1736]: time="2023-10-02T19:18:40.467863217Z" level=warning msg="cleaning up after shim disconnected" id=789ff5f7e2da40fa552c902fc4d112d956ae48f91ab060d5f331f72b7e514de5 namespace=k8s.io Oct 2 19:18:40.467951 env[1736]: time="2023-10-02T19:18:40.467886029Z" level=info msg="cleaning up dead shim" Oct 2 19:18:40.505151 env[1736]: time="2023-10-02T19:18:40.505059268Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:18:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3096 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:18:40Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/789ff5f7e2da40fa552c902fc4d112d956ae48f91ab060d5f331f72b7e514de5/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:18:40.505619 env[1736]: time="2023-10-02T19:18:40.505522228Z" level=error msg="copy shim log" error="read /proc/self/fd/46: file already closed" Oct 2 19:18:40.508378 env[1736]: time="2023-10-02T19:18:40.508307116Z" level=error msg="Failed to pipe stdout of container \"789ff5f7e2da40fa552c902fc4d112d956ae48f91ab060d5f331f72b7e514de5\"" error="reading from a closed fifo" Oct 2 19:18:40.508513 env[1736]: time="2023-10-02T19:18:40.508416160Z" level=error msg="Failed to pipe stderr of container \"789ff5f7e2da40fa552c902fc4d112d956ae48f91ab060d5f331f72b7e514de5\"" error="reading from a closed fifo" Oct 2 19:18:40.510737 env[1736]: time="2023-10-02T19:18:40.510657088Z" level=error msg="StartContainer for \"789ff5f7e2da40fa552c902fc4d112d956ae48f91ab060d5f331f72b7e514de5\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:18:40.511168 kubelet[2201]: E1002 19:18:40.511081 2201 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="789ff5f7e2da40fa552c902fc4d112d956ae48f91ab060d5f331f72b7e514de5" Oct 2 19:18:40.511642 kubelet[2201]: E1002 19:18:40.511481 2201 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:18:40.511642 kubelet[2201]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:18:40.511642 kubelet[2201]: rm /hostbin/cilium-mount Oct 2 19:18:40.511642 kubelet[2201]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4jprz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-g9zw8_kube-system(a054447d-2a60-44bf-9c27-9351e1dac17f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:18:40.511642 kubelet[2201]: E1002 19:18:40.511572 2201 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-g9zw8" podUID="a054447d-2a60-44bf-9c27-9351e1dac17f" Oct 2 19:18:40.663828 kubelet[2201]: E1002 19:18:40.663749 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:41.191670 env[1736]: time="2023-10-02T19:18:41.191611879Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:41.194260 env[1736]: time="2023-10-02T19:18:41.194212999Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:41.196851 env[1736]: time="2023-10-02T19:18:41.196773751Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:41.198029 env[1736]: time="2023-10-02T19:18:41.197962123Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference 
\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Oct 2 19:18:41.201668 env[1736]: time="2023-10-02T19:18:41.201614850Z" level=info msg="CreateContainer within sandbox \"a277bf5fd49ba93dc901aa84fbbfabdc0c23b62cbfee857d83e0e9027a4e44d2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 19:18:41.225566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3692864320.mount: Deactivated successfully. Oct 2 19:18:41.235279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount145087191.mount: Deactivated successfully. Oct 2 19:18:41.242977 env[1736]: time="2023-10-02T19:18:41.242893876Z" level=info msg="CreateContainer within sandbox \"a277bf5fd49ba93dc901aa84fbbfabdc0c23b62cbfee857d83e0e9027a4e44d2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c026181b84de17bda3d65825249134663e46546301d8d9ae8f54eb3d1bf60226\"" Oct 2 19:18:41.244062 env[1736]: time="2023-10-02T19:18:41.244013668Z" level=info msg="StartContainer for \"c026181b84de17bda3d65825249134663e46546301d8d9ae8f54eb3d1bf60226\"" Oct 2 19:18:41.266417 kubelet[2201]: I1002 19:18:41.266350 2201 scope.go:117] "RemoveContainer" containerID="1db43bc51e80a2a26fd0f2876081b6c1bac624f0191818b2c6602d06257b672e" Oct 2 19:18:41.267032 kubelet[2201]: I1002 19:18:41.266983 2201 scope.go:117] "RemoveContainer" containerID="1db43bc51e80a2a26fd0f2876081b6c1bac624f0191818b2c6602d06257b672e" Oct 2 19:18:41.270378 env[1736]: time="2023-10-02T19:18:41.270258255Z" level=info msg="RemoveContainer for \"1db43bc51e80a2a26fd0f2876081b6c1bac624f0191818b2c6602d06257b672e\"" Oct 2 19:18:41.273840 env[1736]: time="2023-10-02T19:18:41.273721070Z" level=info msg="RemoveContainer for \"1db43bc51e80a2a26fd0f2876081b6c1bac624f0191818b2c6602d06257b672e\"" Oct 2 19:18:41.274563 env[1736]: time="2023-10-02T19:18:41.273907250Z" level=error msg="RemoveContainer for \"1db43bc51e80a2a26fd0f2876081b6c1bac624f0191818b2c6602d06257b672e\" failed" error="failed to set removing state for container \"1db43bc51e80a2a26fd0f2876081b6c1bac624f0191818b2c6602d06257b672e\": container is already in removing state" Oct 2 19:18:41.274658 kubelet[2201]: E1002 19:18:41.274182 2201 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"1db43bc51e80a2a26fd0f2876081b6c1bac624f0191818b2c6602d06257b672e\": container is already in removing state" containerID="1db43bc51e80a2a26fd0f2876081b6c1bac624f0191818b2c6602d06257b672e" Oct 2 19:18:41.274658 kubelet[2201]: E1002 19:18:41.274238 2201 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "1db43bc51e80a2a26fd0f2876081b6c1bac624f0191818b2c6602d06257b672e": container is already in removing state; Skipping pod "cilium-g9zw8_kube-system(a054447d-2a60-44bf-9c27-9351e1dac17f)" Oct 2 19:18:41.274798 kubelet[2201]: E1002 19:18:41.274711 2201 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-g9zw8_kube-system(a054447d-2a60-44bf-9c27-9351e1dac17f)\"" pod="kube-system/cilium-g9zw8" podUID="a054447d-2a60-44bf-9c27-9351e1dac17f" Oct 2 19:18:41.278565 env[1736]: time="2023-10-02T19:18:41.278489750Z" level=info msg="RemoveContainer for \"1db43bc51e80a2a26fd0f2876081b6c1bac624f0191818b2c6602d06257b672e\" returns successfully" Oct 2 
19:18:41.296739 systemd[1]: Started cri-containerd-c026181b84de17bda3d65825249134663e46546301d8d9ae8f54eb3d1bf60226.scope. Oct 2 19:18:41.341000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.344790 kernel: kauditd_printk_skb: 106 callbacks suppressed Oct 2 19:18:41.344898 kernel: audit: type=1400 audit(1696274321.341:747): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.344000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.359968 kernel: audit: type=1400 audit(1696274321.344:748): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.344000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.372887 kernel: audit: type=1400 audit(1696274321.344:749): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.381074 kernel: audit: type=1400 audit(1696274321.344:750): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.389205 kernel: audit: type=1400 audit(1696274321.344:751): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.397388 kernel: audit: type=1400 audit(1696274321.344:752): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.405345 kernel: audit: type=1400 audit(1696274321.344:753): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:18:41.413330 kernel: audit: type=1400 audit(1696274321.344:754): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.413715 kernel: audit: type=1400 audit(1696274321.344:755): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.344000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.432823 kernel: audit: type=1400 audit(1696274321.351:756): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.351000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.351000 audit: BPF prog-id=89 op=LOAD Oct 2 19:18:41.359000 audit[3116]: AVC avc: denied { bpf } for pid=3116 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.359000 audit[3116]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=400011db38 a2=10 a3=0 items=0 ppid=2957 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:41.359000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330323631383162383464653137626461336436353832353234393133 Oct 2 19:18:41.359000 audit[3116]: AVC avc: denied { perfmon } for pid=3116 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.359000 audit[3116]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=400011d5a0 a2=3c a3=0 items=0 ppid=2957 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:41.359000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330323631383162383464653137626461336436353832353234393133 Oct 2 19:18:41.359000 audit[3116]: AVC avc: denied { bpf } for pid=3116 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.359000 audit[3116]: AVC avc: denied { bpf } for pid=3116 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.359000 audit[3116]: AVC avc: denied { bpf } for pid=3116 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:18:41.359000 audit[3116]: AVC avc: denied { perfmon } for pid=3116 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.359000 audit[3116]: AVC avc: denied { perfmon } for pid=3116 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.359000 audit[3116]: AVC avc: denied { perfmon } for pid=3116 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.359000 audit[3116]: AVC avc: denied { perfmon } for pid=3116 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.359000 audit[3116]: AVC avc: denied { perfmon } for pid=3116 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.359000 audit[3116]: AVC avc: denied { bpf } for pid=3116 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.359000 audit[3116]: AVC avc: denied { bpf } for pid=3116 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.359000 audit: BPF prog-id=90 op=LOAD Oct 2 19:18:41.359000 audit[3116]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400011d8e0 a2=78 a3=0 items=0 ppid=2957 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:41.359000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330323631383162383464653137626461336436353832353234393133 Oct 2 19:18:41.359000 audit[3116]: AVC avc: denied { bpf } for pid=3116 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.359000 audit[3116]: AVC avc: denied { bpf } for pid=3116 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.359000 audit[3116]: AVC avc: denied { perfmon } for pid=3116 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.359000 audit[3116]: AVC avc: denied { perfmon } for pid=3116 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.359000 audit[3116]: AVC avc: denied { perfmon } for pid=3116 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.359000 audit[3116]: AVC avc: denied { perfmon } for pid=3116 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.359000 audit[3116]: AVC avc: denied { perfmon } for pid=3116 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.359000 audit[3116]: AVC avc: denied { bpf } for pid=3116 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.359000 audit[3116]: AVC avc: denied { bpf } for pid=3116 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.359000 audit: BPF prog-id=91 op=LOAD Oct 2 19:18:41.359000 audit[3116]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400011d670 a2=78 a3=0 items=0 ppid=2957 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:41.359000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330323631383162383464653137626461336436353832353234393133 Oct 2 19:18:41.359000 audit: BPF prog-id=91 op=UNLOAD Oct 2 19:18:41.359000 audit: BPF prog-id=90 op=UNLOAD Oct 2 19:18:41.359000 audit[3116]: AVC avc: denied { bpf } for pid=3116 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.359000 audit[3116]: AVC avc: denied { bpf } for pid=3116 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.359000 audit[3116]: AVC avc: denied { bpf } for pid=3116 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.359000 audit[3116]: AVC avc: denied { perfmon } for pid=3116 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.359000 audit[3116]: AVC avc: denied { perfmon } for pid=3116 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.359000 audit[3116]: AVC avc: denied { perfmon } for pid=3116 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.359000 audit[3116]: AVC avc: denied { perfmon } for pid=3116 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.359000 audit[3116]: AVC avc: denied { perfmon } for pid=3116 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.359000 audit[3116]: AVC avc: denied { bpf } for pid=3116 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.359000 audit[3116]: AVC avc: denied { bpf } for pid=3116 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:41.359000 audit: BPF prog-id=92 op=LOAD Oct 2 19:18:41.359000 audit[3116]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=16 a0=5 a1=400011db40 a2=78 a3=0 items=0 ppid=2957 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:41.359000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330323631383162383464653137626461336436353832353234393133 Oct 2 19:18:41.439337 env[1736]: time="2023-10-02T19:18:41.437065625Z" level=info msg="StartContainer for \"c026181b84de17bda3d65825249134663e46546301d8d9ae8f54eb3d1bf60226\" returns successfully" Oct 2 19:18:41.510000 audit[3127]: AVC avc: denied { map_create } for pid=3127 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c377,c830 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c377,c830 tclass=bpf permissive=0 Oct 2 19:18:41.510000 audit[3127]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-13 a0=0 a1=40005a3768 a2=48 a3=0 items=0 ppid=2957 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c377,c830 key=(null) Oct 2 19:18:41.510000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 19:18:41.660300 kubelet[2201]: E1002 19:18:41.660242 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:41.664770 kubelet[2201]: E1002 19:18:41.664707 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:42.282240 kubelet[2201]: E1002 19:18:42.281782 2201 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-g9zw8_kube-system(a054447d-2a60-44bf-9c27-9351e1dac17f)\"" pod="kube-system/cilium-g9zw8" podUID="a054447d-2a60-44bf-9c27-9351e1dac17f" Oct 2 19:18:42.494742 kubelet[2201]: W1002 19:18:42.494671 2201 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda054447d_2a60_44bf_9c27_9351e1dac17f.slice/cri-containerd-1db43bc51e80a2a26fd0f2876081b6c1bac624f0191818b2c6602d06257b672e.scope WatchSource:0}: container "1db43bc51e80a2a26fd0f2876081b6c1bac624f0191818b2c6602d06257b672e" in namespace "k8s.io": not found Oct 2 19:18:42.665108 kubelet[2201]: E1002 19:18:42.664983 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:43.666232 kubelet[2201]: E1002 19:18:43.666187 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:44.666918 kubelet[2201]: E1002 19:18:44.666873 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:45.604658 kubelet[2201]: W1002 19:18:45.604600 2201 manager.go:1159] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda054447d_2a60_44bf_9c27_9351e1dac17f.slice/cri-containerd-789ff5f7e2da40fa552c902fc4d112d956ae48f91ab060d5f331f72b7e514de5.scope WatchSource:0}: task 789ff5f7e2da40fa552c902fc4d112d956ae48f91ab060d5f331f72b7e514de5 not found: not found Oct 2 19:18:45.668409 kubelet[2201]: E1002 19:18:45.668366 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:46.660955 kubelet[2201]: E1002 19:18:46.660890 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:46.669606 kubelet[2201]: E1002 19:18:46.669559 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:47.670148 kubelet[2201]: E1002 19:18:47.670077 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:48.671768 kubelet[2201]: E1002 19:18:48.671724 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:49.673308 kubelet[2201]: E1002 19:18:49.673243 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:50.673848 kubelet[2201]: E1002 19:18:50.673782 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:51.662609 kubelet[2201]: E1002 19:18:51.662574 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:51.674489 kubelet[2201]: E1002 19:18:51.674458 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:52.675276 kubelet[2201]: E1002 19:18:52.675179 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:53.676090 kubelet[2201]: E1002 19:18:53.676024 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:53.758588 env[1736]: time="2023-10-02T19:18:53.758052760Z" level=info msg="CreateContainer within sandbox \"4feadb73dba0f84388fad9fb95e4bae196010a62fd0bd63b9cd24cd672e26ff6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:18:53.779234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2173567923.mount: Deactivated successfully. Oct 2 19:18:53.784924 env[1736]: time="2023-10-02T19:18:53.784835767Z" level=info msg="CreateContainer within sandbox \"4feadb73dba0f84388fad9fb95e4bae196010a62fd0bd63b9cd24cd672e26ff6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"638529775d80cb68f736268fea2d1c564a2729a18cda6eb29086b6daf7f662ba\"" Oct 2 19:18:53.786145 env[1736]: time="2023-10-02T19:18:53.786045511Z" level=info msg="StartContainer for \"638529775d80cb68f736268fea2d1c564a2729a18cda6eb29086b6daf7f662ba\"" Oct 2 19:18:53.840438 systemd[1]: Started cri-containerd-638529775d80cb68f736268fea2d1c564a2729a18cda6eb29086b6daf7f662ba.scope. 
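The SELinux audit records above are dense but mechanical to read: arch=c00000b7 is the AArch64 syscall ABI, syscall=280 is bpf(2) on that architecture, capability=38 and capability=39 are CAP_PERFMON and CAP_BPF, exit=-13 in the later map_create record is EACCES, and the PROCTITLE field is just the offending command line, hex-encoded with NUL-separated arguments. A minimal decoding sketch (not part of the captured log; the sample string is a truncated prefix of the runc invocation recorded above, not the full value):

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle converts the hex-encoded PROCTITLE field of an audit record
// back into the NUL-separated argv it was captured from.
func decodeProctitle(field string) ([]string, error) {
	raw, err := hex.DecodeString(field)
	if err != nil {
		return nil, err
	}
	return strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00"), nil
}

func main() {
	// Truncated prefix of one PROCTITLE value from the records above.
	const sample = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
	argv, err := decodeProctitle(sample)
	if err != nil {
		fmt.Println("decode:", err)
		return
	}
	fmt.Println(argv) // [runc --root /run/containerd/runc/k8s.io]
}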
Oct 2 19:18:53.879732 systemd[1]: cri-containerd-638529775d80cb68f736268fea2d1c564a2729a18cda6eb29086b6daf7f662ba.scope: Deactivated successfully. Oct 2 19:18:53.994826 env[1736]: time="2023-10-02T19:18:53.994660705Z" level=info msg="shim disconnected" id=638529775d80cb68f736268fea2d1c564a2729a18cda6eb29086b6daf7f662ba Oct 2 19:18:53.994826 env[1736]: time="2023-10-02T19:18:53.994742090Z" level=warning msg="cleaning up after shim disconnected" id=638529775d80cb68f736268fea2d1c564a2729a18cda6eb29086b6daf7f662ba namespace=k8s.io Oct 2 19:18:53.994826 env[1736]: time="2023-10-02T19:18:53.994765046Z" level=info msg="cleaning up dead shim" Oct 2 19:18:54.021420 env[1736]: time="2023-10-02T19:18:54.021333896Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:18:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3173 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:18:54Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/638529775d80cb68f736268fea2d1c564a2729a18cda6eb29086b6daf7f662ba/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:18:54.021898 env[1736]: time="2023-10-02T19:18:54.021810133Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 19:18:54.025379 env[1736]: time="2023-10-02T19:18:54.025316362Z" level=error msg="Failed to pipe stderr of container \"638529775d80cb68f736268fea2d1c564a2729a18cda6eb29086b6daf7f662ba\"" error="reading from a closed fifo" Oct 2 19:18:54.025640 env[1736]: time="2023-10-02T19:18:54.025555200Z" level=error msg="Failed to pipe stdout of container \"638529775d80cb68f736268fea2d1c564a2729a18cda6eb29086b6daf7f662ba\"" error="reading from a closed fifo" Oct 2 19:18:54.027925 env[1736]: time="2023-10-02T19:18:54.027849705Z" level=error msg="StartContainer for \"638529775d80cb68f736268fea2d1c564a2729a18cda6eb29086b6daf7f662ba\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:18:54.028502 kubelet[2201]: E1002 19:18:54.028446 2201 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="638529775d80cb68f736268fea2d1c564a2729a18cda6eb29086b6daf7f662ba" Oct 2 19:18:54.036076 kubelet[2201]: E1002 19:18:54.036017 2201 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:18:54.036076 kubelet[2201]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:18:54.036076 kubelet[2201]: rm /hostbin/cilium-mount Oct 2 19:18:54.036076 kubelet[2201]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4jprz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-g9zw8_kube-system(a054447d-2a60-44bf-9c27-9351e1dac17f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:18:54.036529 kubelet[2201]: E1002 19:18:54.036155 2201 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-g9zw8" podUID="a054447d-2a60-44bf-9c27-9351e1dac17f" Oct 2 19:18:54.314571 kubelet[2201]: I1002 19:18:54.314454 2201 scope.go:117] "RemoveContainer" containerID="789ff5f7e2da40fa552c902fc4d112d956ae48f91ab060d5f331f72b7e514de5" Oct 2 19:18:54.315024 kubelet[2201]: I1002 19:18:54.314995 2201 scope.go:117] "RemoveContainer" containerID="789ff5f7e2da40fa552c902fc4d112d956ae48f91ab060d5f331f72b7e514de5" Oct 2 19:18:54.317255 env[1736]: time="2023-10-02T19:18:54.317199408Z" level=info msg="RemoveContainer for \"789ff5f7e2da40fa552c902fc4d112d956ae48f91ab060d5f331f72b7e514de5\"" Oct 2 19:18:54.318657 env[1736]: time="2023-10-02T19:18:54.318570301Z" level=info msg="RemoveContainer for \"789ff5f7e2da40fa552c902fc4d112d956ae48f91ab060d5f331f72b7e514de5\"" Oct 2 19:18:54.318893 env[1736]: time="2023-10-02T19:18:54.318821451Z" level=error msg="RemoveContainer for \"789ff5f7e2da40fa552c902fc4d112d956ae48f91ab060d5f331f72b7e514de5\" failed" error="failed to set removing state for container \"789ff5f7e2da40fa552c902fc4d112d956ae48f91ab060d5f331f72b7e514de5\": container is already in removing state" Oct 2 19:18:54.319186 kubelet[2201]: E1002 19:18:54.319116 2201 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"789ff5f7e2da40fa552c902fc4d112d956ae48f91ab060d5f331f72b7e514de5\": 
container is already in removing state" containerID="789ff5f7e2da40fa552c902fc4d112d956ae48f91ab060d5f331f72b7e514de5" Oct 2 19:18:54.319360 kubelet[2201]: E1002 19:18:54.319337 2201 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "789ff5f7e2da40fa552c902fc4d112d956ae48f91ab060d5f331f72b7e514de5": container is already in removing state; Skipping pod "cilium-g9zw8_kube-system(a054447d-2a60-44bf-9c27-9351e1dac17f)" Oct 2 19:18:54.319945 kubelet[2201]: E1002 19:18:54.319915 2201 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-g9zw8_kube-system(a054447d-2a60-44bf-9c27-9351e1dac17f)\"" pod="kube-system/cilium-g9zw8" podUID="a054447d-2a60-44bf-9c27-9351e1dac17f" Oct 2 19:18:54.323737 env[1736]: time="2023-10-02T19:18:54.323679733Z" level=info msg="RemoveContainer for \"789ff5f7e2da40fa552c902fc4d112d956ae48f91ab060d5f331f72b7e514de5\" returns successfully" Oct 2 19:18:54.341681 kubelet[2201]: I1002 19:18:54.341628 2201 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-ckw7d" podStartSLOduration=13.956793021 podCreationTimestamp="2023-10-02 19:18:38 +0000 UTC" firstStartedPulling="2023-10-02 19:18:38.813699795 +0000 UTC m=+203.350519672" lastFinishedPulling="2023-10-02 19:18:41.198458695 +0000 UTC m=+205.735278560" observedRunningTime="2023-10-02 19:18:42.320880188 +0000 UTC m=+206.857700077" watchObservedRunningTime="2023-10-02 19:18:54.341551909 +0000 UTC m=+218.878371774" Oct 2 19:18:54.676278 kubelet[2201]: E1002 19:18:54.676243 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:54.770374 systemd[1]: run-containerd-runc-k8s.io-638529775d80cb68f736268fea2d1c564a2729a18cda6eb29086b6daf7f662ba-runc.cnEcPm.mount: Deactivated successfully. Oct 2 19:18:54.770539 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-638529775d80cb68f736268fea2d1c564a2729a18cda6eb29086b6daf7f662ba-rootfs.mount: Deactivated successfully. 
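Every attempt on this pod dies at the same step: during container init, runc writes the container's SELinux key-creation label to /proc/self/attr/keycreate and the kernel rejects the write with EINVAL, which is the error surfaced in the StartContainer failures above. A minimal probe sketch of that single write, outside of runc; the label is only an example (it is the cilium-operator context from the AVC record above, not necessarily what runc would compute here), and whether the write is accepted depends entirely on the loaded SELinux policy:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Example label only; runc writes the label computed for the container
	// it is about to start.
	label := "system_u:system_r:svirt_lxc_net_t:s0:c377,c830"

	f, err := os.OpenFile("/proc/self/attr/keycreate", os.O_WRONLY, 0)
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer f.Close()

	if _, err := f.Write([]byte(label)); err != nil {
		// On this host the equivalent write inside runc fails with
		// "invalid argument" (EINVAL), matching the log above.
		fmt.Println("write:", err)
		return
	}
	fmt.Println("keycreate label accepted")
}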
Oct 2 19:18:55.677689 kubelet[2201]: E1002 19:18:55.677648 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:56.474676 kubelet[2201]: E1002 19:18:56.474634 2201 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:56.663942 kubelet[2201]: E1002 19:18:56.663903 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:56.678619 kubelet[2201]: E1002 19:18:56.678590 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:57.100660 kubelet[2201]: W1002 19:18:57.100590 2201 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda054447d_2a60_44bf_9c27_9351e1dac17f.slice/cri-containerd-638529775d80cb68f736268fea2d1c564a2729a18cda6eb29086b6daf7f662ba.scope WatchSource:0}: task 638529775d80cb68f736268fea2d1c564a2729a18cda6eb29086b6daf7f662ba not found: not found Oct 2 19:18:57.680143 kubelet[2201]: E1002 19:18:57.680074 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:58.681191 kubelet[2201]: E1002 19:18:58.681108 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:59.681888 kubelet[2201]: E1002 19:18:59.681823 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:00.682073 kubelet[2201]: E1002 19:19:00.682034 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:01.665621 kubelet[2201]: E1002 19:19:01.665580 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:01.683791 kubelet[2201]: E1002 19:19:01.683767 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:02.685218 kubelet[2201]: E1002 19:19:02.685172 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:03.686512 kubelet[2201]: E1002 19:19:03.686442 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:04.686670 kubelet[2201]: E1002 19:19:04.686599 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:05.687532 kubelet[2201]: E1002 19:19:05.687472 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:05.755100 kubelet[2201]: E1002 19:19:05.755040 2201 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-g9zw8_kube-system(a054447d-2a60-44bf-9c27-9351e1dac17f)\"" pod="kube-system/cilium-g9zw8" podUID="a054447d-2a60-44bf-9c27-9351e1dac17f" Oct 2 19:19:06.666873 kubelet[2201]: E1002 19:19:06.666840 2201 kubelet.go:2855] 
"Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:06.688248 kubelet[2201]: E1002 19:19:06.688216 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:07.689376 kubelet[2201]: E1002 19:19:07.689329 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:08.690597 kubelet[2201]: E1002 19:19:08.690532 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:09.690969 kubelet[2201]: E1002 19:19:09.690928 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:10.691865 kubelet[2201]: E1002 19:19:10.691823 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:11.668812 kubelet[2201]: E1002 19:19:11.668754 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:11.693580 kubelet[2201]: E1002 19:19:11.693541 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:12.695150 kubelet[2201]: E1002 19:19:12.695036 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:13.695489 kubelet[2201]: E1002 19:19:13.695427 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:14.696287 kubelet[2201]: E1002 19:19:14.696221 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:15.697313 kubelet[2201]: E1002 19:19:15.697244 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:16.475213 kubelet[2201]: E1002 19:19:16.475157 2201 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:16.508064 env[1736]: time="2023-10-02T19:19:16.507980667Z" level=info msg="StopPodSandbox for \"1f4c4305d46b03db41fca1bf07f295066b212a4940b18f0b8f61f6916ed38269\"" Oct 2 19:19:16.508656 env[1736]: time="2023-10-02T19:19:16.508227509Z" level=info msg="TearDown network for sandbox \"1f4c4305d46b03db41fca1bf07f295066b212a4940b18f0b8f61f6916ed38269\" successfully" Oct 2 19:19:16.508656 env[1736]: time="2023-10-02T19:19:16.508312025Z" level=info msg="StopPodSandbox for \"1f4c4305d46b03db41fca1bf07f295066b212a4940b18f0b8f61f6916ed38269\" returns successfully" Oct 2 19:19:16.509608 env[1736]: time="2023-10-02T19:19:16.509334816Z" level=info msg="RemovePodSandbox for \"1f4c4305d46b03db41fca1bf07f295066b212a4940b18f0b8f61f6916ed38269\"" Oct 2 19:19:16.509608 env[1736]: time="2023-10-02T19:19:16.509406901Z" level=info msg="Forcibly stopping sandbox \"1f4c4305d46b03db41fca1bf07f295066b212a4940b18f0b8f61f6916ed38269\"" Oct 2 19:19:16.509608 env[1736]: time="2023-10-02T19:19:16.509535110Z" level=info msg="TearDown network for sandbox \"1f4c4305d46b03db41fca1bf07f295066b212a4940b18f0b8f61f6916ed38269\" successfully" Oct 2 19:19:16.514357 env[1736]: 
time="2023-10-02T19:19:16.514282618Z" level=info msg="RemovePodSandbox \"1f4c4305d46b03db41fca1bf07f295066b212a4940b18f0b8f61f6916ed38269\" returns successfully" Oct 2 19:19:16.670031 kubelet[2201]: E1002 19:19:16.669757 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:16.697618 kubelet[2201]: E1002 19:19:16.697569 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:17.697821 kubelet[2201]: E1002 19:19:17.697753 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:18.698463 kubelet[2201]: E1002 19:19:18.698400 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:18.759318 env[1736]: time="2023-10-02T19:19:18.759180532Z" level=info msg="CreateContainer within sandbox \"4feadb73dba0f84388fad9fb95e4bae196010a62fd0bd63b9cd24cd672e26ff6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:19:18.779406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3781011461.mount: Deactivated successfully. Oct 2 19:19:18.790527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3106021139.mount: Deactivated successfully. Oct 2 19:19:18.792075 env[1736]: time="2023-10-02T19:19:18.791986505Z" level=info msg="CreateContainer within sandbox \"4feadb73dba0f84388fad9fb95e4bae196010a62fd0bd63b9cd24cd672e26ff6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"a7cd21580be6804712332b56a6e9461766ca58faf1ef16bb21cbcded1cff04bb\"" Oct 2 19:19:18.793197 env[1736]: time="2023-10-02T19:19:18.793088520Z" level=info msg="StartContainer for \"a7cd21580be6804712332b56a6e9461766ca58faf1ef16bb21cbcded1cff04bb\"" Oct 2 19:19:18.842233 systemd[1]: Started cri-containerd-a7cd21580be6804712332b56a6e9461766ca58faf1ef16bb21cbcded1cff04bb.scope. Oct 2 19:19:18.880615 systemd[1]: cri-containerd-a7cd21580be6804712332b56a6e9461766ca58faf1ef16bb21cbcded1cff04bb.scope: Deactivated successfully. 
Oct 2 19:19:18.902008 env[1736]: time="2023-10-02T19:19:18.901933656Z" level=info msg="shim disconnected" id=a7cd21580be6804712332b56a6e9461766ca58faf1ef16bb21cbcded1cff04bb Oct 2 19:19:18.902333 env[1736]: time="2023-10-02T19:19:18.902008764Z" level=warning msg="cleaning up after shim disconnected" id=a7cd21580be6804712332b56a6e9461766ca58faf1ef16bb21cbcded1cff04bb namespace=k8s.io Oct 2 19:19:18.902333 env[1736]: time="2023-10-02T19:19:18.902034577Z" level=info msg="cleaning up dead shim" Oct 2 19:19:18.929315 env[1736]: time="2023-10-02T19:19:18.929228065Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:19:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3214 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:19:18Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/a7cd21580be6804712332b56a6e9461766ca58faf1ef16bb21cbcded1cff04bb/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:19:18.929781 env[1736]: time="2023-10-02T19:19:18.929683528Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 19:19:18.930165 env[1736]: time="2023-10-02T19:19:18.930075942Z" level=error msg="Failed to pipe stdout of container \"a7cd21580be6804712332b56a6e9461766ca58faf1ef16bb21cbcded1cff04bb\"" error="reading from a closed fifo" Oct 2 19:19:18.931290 env[1736]: time="2023-10-02T19:19:18.931221218Z" level=error msg="Failed to pipe stderr of container \"a7cd21580be6804712332b56a6e9461766ca58faf1ef16bb21cbcded1cff04bb\"" error="reading from a closed fifo" Oct 2 19:19:18.933655 env[1736]: time="2023-10-02T19:19:18.933583721Z" level=error msg="StartContainer for \"a7cd21580be6804712332b56a6e9461766ca58faf1ef16bb21cbcded1cff04bb\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:19:18.933951 kubelet[2201]: E1002 19:19:18.933910 2201 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="a7cd21580be6804712332b56a6e9461766ca58faf1ef16bb21cbcded1cff04bb" Oct 2 19:19:18.934110 kubelet[2201]: E1002 19:19:18.934062 2201 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:19:18.934110 kubelet[2201]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:19:18.934110 kubelet[2201]: rm /hostbin/cilium-mount Oct 2 19:19:18.934110 kubelet[2201]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4jprz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-g9zw8_kube-system(a054447d-2a60-44bf-9c27-9351e1dac17f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:19:18.934449 kubelet[2201]: E1002 19:19:18.934254 2201 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-g9zw8" podUID="a054447d-2a60-44bf-9c27-9351e1dac17f" Oct 2 19:19:19.373685 kubelet[2201]: I1002 19:19:19.373651 2201 scope.go:117] "RemoveContainer" containerID="638529775d80cb68f736268fea2d1c564a2729a18cda6eb29086b6daf7f662ba" Oct 2 19:19:19.374723 kubelet[2201]: I1002 19:19:19.374696 2201 scope.go:117] "RemoveContainer" containerID="638529775d80cb68f736268fea2d1c564a2729a18cda6eb29086b6daf7f662ba" Oct 2 19:19:19.377285 env[1736]: time="2023-10-02T19:19:19.377221364Z" level=info msg="RemoveContainer for \"638529775d80cb68f736268fea2d1c564a2729a18cda6eb29086b6daf7f662ba\"" Oct 2 19:19:19.378441 env[1736]: time="2023-10-02T19:19:19.378391972Z" level=info msg="RemoveContainer for \"638529775d80cb68f736268fea2d1c564a2729a18cda6eb29086b6daf7f662ba\"" Oct 2 19:19:19.378809 env[1736]: time="2023-10-02T19:19:19.378755334Z" level=error msg="RemoveContainer for \"638529775d80cb68f736268fea2d1c564a2729a18cda6eb29086b6daf7f662ba\" failed" error="failed to set removing state for container \"638529775d80cb68f736268fea2d1c564a2729a18cda6eb29086b6daf7f662ba\": container is already in removing state" Oct 2 19:19:19.379320 kubelet[2201]: E1002 19:19:19.379277 2201 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"638529775d80cb68f736268fea2d1c564a2729a18cda6eb29086b6daf7f662ba\": 
container is already in removing state" containerID="638529775d80cb68f736268fea2d1c564a2729a18cda6eb29086b6daf7f662ba" Oct 2 19:19:19.379473 kubelet[2201]: E1002 19:19:19.379342 2201 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "638529775d80cb68f736268fea2d1c564a2729a18cda6eb29086b6daf7f662ba": container is already in removing state; Skipping pod "cilium-g9zw8_kube-system(a054447d-2a60-44bf-9c27-9351e1dac17f)" Oct 2 19:19:19.379940 kubelet[2201]: E1002 19:19:19.379838 2201 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-g9zw8_kube-system(a054447d-2a60-44bf-9c27-9351e1dac17f)\"" pod="kube-system/cilium-g9zw8" podUID="a054447d-2a60-44bf-9c27-9351e1dac17f" Oct 2 19:19:19.390243 env[1736]: time="2023-10-02T19:19:19.390158792Z" level=info msg="RemoveContainer for \"638529775d80cb68f736268fea2d1c564a2729a18cda6eb29086b6daf7f662ba\" returns successfully" Oct 2 19:19:19.699298 kubelet[2201]: E1002 19:19:19.699243 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:19.771387 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7cd21580be6804712332b56a6e9461766ca58faf1ef16bb21cbcded1cff04bb-rootfs.mount: Deactivated successfully. Oct 2 19:19:20.699411 kubelet[2201]: E1002 19:19:20.699371 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:21.670800 kubelet[2201]: E1002 19:19:21.670766 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:21.701145 kubelet[2201]: E1002 19:19:21.701063 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:22.008273 kubelet[2201]: W1002 19:19:22.008227 2201 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda054447d_2a60_44bf_9c27_9351e1dac17f.slice/cri-containerd-a7cd21580be6804712332b56a6e9461766ca58faf1ef16bb21cbcded1cff04bb.scope WatchSource:0}: task a7cd21580be6804712332b56a6e9461766ca58faf1ef16bb21cbcded1cff04bb not found: not found Oct 2 19:19:22.701270 kubelet[2201]: E1002 19:19:22.701201 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:23.702460 kubelet[2201]: E1002 19:19:23.702362 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:24.703102 kubelet[2201]: E1002 19:19:24.703055 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:25.704537 kubelet[2201]: E1002 19:19:25.704474 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:26.677732 kubelet[2201]: E1002 19:19:26.677692 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:26.705211 kubelet[2201]: E1002 19:19:26.705177 
2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:27.706552 kubelet[2201]: E1002 19:19:27.706483 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:28.707398 kubelet[2201]: E1002 19:19:28.707335 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:29.707838 kubelet[2201]: E1002 19:19:29.707754 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:30.708955 kubelet[2201]: E1002 19:19:30.708875 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:30.756787 kubelet[2201]: E1002 19:19:30.756091 2201 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-g9zw8_kube-system(a054447d-2a60-44bf-9c27-9351e1dac17f)\"" pod="kube-system/cilium-g9zw8" podUID="a054447d-2a60-44bf-9c27-9351e1dac17f" Oct 2 19:19:31.679996 kubelet[2201]: E1002 19:19:31.679774 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:31.709806 kubelet[2201]: E1002 19:19:31.709747 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:32.710923 kubelet[2201]: E1002 19:19:32.710882 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:33.712709 kubelet[2201]: E1002 19:19:33.712667 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:34.713987 kubelet[2201]: E1002 19:19:34.713921 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:35.715104 kubelet[2201]: E1002 19:19:35.715034 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:36.474675 kubelet[2201]: E1002 19:19:36.474608 2201 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:36.681220 kubelet[2201]: E1002 19:19:36.681187 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:36.715865 kubelet[2201]: E1002 19:19:36.715821 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:37.717424 kubelet[2201]: E1002 19:19:37.717367 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:38.718746 kubelet[2201]: E1002 19:19:38.718669 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:39.719853 kubelet[2201]: E1002 19:19:39.719782 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:40.115947 
env[1736]: time="2023-10-02T19:19:40.115806489Z" level=info msg="StopPodSandbox for \"4feadb73dba0f84388fad9fb95e4bae196010a62fd0bd63b9cd24cd672e26ff6\"" Oct 2 19:19:40.116668 env[1736]: time="2023-10-02T19:19:40.116621256Z" level=info msg="Container to stop \"a7cd21580be6804712332b56a6e9461766ca58faf1ef16bb21cbcded1cff04bb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:19:40.119174 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4feadb73dba0f84388fad9fb95e4bae196010a62fd0bd63b9cd24cd672e26ff6-shm.mount: Deactivated successfully. Oct 2 19:19:40.143336 kernel: kauditd_printk_skb: 50 callbacks suppressed Oct 2 19:19:40.143486 kernel: audit: type=1334 audit(1696274380.137:766): prog-id=85 op=UNLOAD Oct 2 19:19:40.137000 audit: BPF prog-id=85 op=UNLOAD Oct 2 19:19:40.138476 systemd[1]: cri-containerd-4feadb73dba0f84388fad9fb95e4bae196010a62fd0bd63b9cd24cd672e26ff6.scope: Deactivated successfully. Oct 2 19:19:40.146429 env[1736]: time="2023-10-02T19:19:40.146303340Z" level=info msg="StopContainer for \"c026181b84de17bda3d65825249134663e46546301d8d9ae8f54eb3d1bf60226\" with timeout 30 (s)" Oct 2 19:19:40.146837 env[1736]: time="2023-10-02T19:19:40.146780211Z" level=info msg="Stop container \"c026181b84de17bda3d65825249134663e46546301d8d9ae8f54eb3d1bf60226\" with signal terminated" Oct 2 19:19:40.147000 audit: BPF prog-id=88 op=UNLOAD Oct 2 19:19:40.152186 kernel: audit: type=1334 audit(1696274380.147:767): prog-id=88 op=UNLOAD Oct 2 19:19:40.197427 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4feadb73dba0f84388fad9fb95e4bae196010a62fd0bd63b9cd24cd672e26ff6-rootfs.mount: Deactivated successfully. Oct 2 19:19:40.203813 systemd[1]: cri-containerd-c026181b84de17bda3d65825249134663e46546301d8d9ae8f54eb3d1bf60226.scope: Deactivated successfully. Oct 2 19:19:40.208528 kernel: audit: type=1334 audit(1696274380.203:768): prog-id=89 op=UNLOAD Oct 2 19:19:40.203000 audit: BPF prog-id=89 op=UNLOAD Oct 2 19:19:40.209000 audit: BPF prog-id=92 op=UNLOAD Oct 2 19:19:40.214238 kernel: audit: type=1334 audit(1696274380.209:769): prog-id=92 op=UNLOAD Oct 2 19:19:40.224687 env[1736]: time="2023-10-02T19:19:40.224618708Z" level=info msg="shim disconnected" id=4feadb73dba0f84388fad9fb95e4bae196010a62fd0bd63b9cd24cd672e26ff6 Oct 2 19:19:40.225766 env[1736]: time="2023-10-02T19:19:40.225716365Z" level=warning msg="cleaning up after shim disconnected" id=4feadb73dba0f84388fad9fb95e4bae196010a62fd0bd63b9cd24cd672e26ff6 namespace=k8s.io Oct 2 19:19:40.225969 env[1736]: time="2023-10-02T19:19:40.225938462Z" level=info msg="cleaning up dead shim" Oct 2 19:19:40.258169 env[1736]: time="2023-10-02T19:19:40.258077318Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:19:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3259 runtime=io.containerd.runc.v2\n" Oct 2 19:19:40.258918 env[1736]: time="2023-10-02T19:19:40.258874938Z" level=info msg="TearDown network for sandbox \"4feadb73dba0f84388fad9fb95e4bae196010a62fd0bd63b9cd24cd672e26ff6\" successfully" Oct 2 19:19:40.259116 env[1736]: time="2023-10-02T19:19:40.259076911Z" level=info msg="StopPodSandbox for \"4feadb73dba0f84388fad9fb95e4bae196010a62fd0bd63b9cd24cd672e26ff6\" returns successfully" Oct 2 19:19:40.273623 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c026181b84de17bda3d65825249134663e46546301d8d9ae8f54eb3d1bf60226-rootfs.mount: Deactivated successfully. 
Oct 2 19:19:40.285457 env[1736]: time="2023-10-02T19:19:40.285391911Z" level=info msg="shim disconnected" id=c026181b84de17bda3d65825249134663e46546301d8d9ae8f54eb3d1bf60226 Oct 2 19:19:40.285879 env[1736]: time="2023-10-02T19:19:40.285833849Z" level=warning msg="cleaning up after shim disconnected" id=c026181b84de17bda3d65825249134663e46546301d8d9ae8f54eb3d1bf60226 namespace=k8s.io Oct 2 19:19:40.286036 env[1736]: time="2023-10-02T19:19:40.286007142Z" level=info msg="cleaning up dead shim" Oct 2 19:19:40.312667 env[1736]: time="2023-10-02T19:19:40.312611919Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:19:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3281 runtime=io.containerd.runc.v2\n" Oct 2 19:19:40.315784 env[1736]: time="2023-10-02T19:19:40.315731526Z" level=info msg="StopContainer for \"c026181b84de17bda3d65825249134663e46546301d8d9ae8f54eb3d1bf60226\" returns successfully" Oct 2 19:19:40.317172 env[1736]: time="2023-10-02T19:19:40.317085516Z" level=info msg="StopPodSandbox for \"a277bf5fd49ba93dc901aa84fbbfabdc0c23b62cbfee857d83e0e9027a4e44d2\"" Oct 2 19:19:40.317417 env[1736]: time="2023-10-02T19:19:40.317203045Z" level=info msg="Container to stop \"c026181b84de17bda3d65825249134663e46546301d8d9ae8f54eb3d1bf60226\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:19:40.319589 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a277bf5fd49ba93dc901aa84fbbfabdc0c23b62cbfee857d83e0e9027a4e44d2-shm.mount: Deactivated successfully. Oct 2 19:19:40.340106 systemd[1]: cri-containerd-a277bf5fd49ba93dc901aa84fbbfabdc0c23b62cbfee857d83e0e9027a4e44d2.scope: Deactivated successfully. Oct 2 19:19:40.339000 audit: BPF prog-id=81 op=UNLOAD Oct 2 19:19:40.344389 kernel: audit: type=1334 audit(1696274380.339:770): prog-id=81 op=UNLOAD Oct 2 19:19:40.345000 audit: BPF prog-id=84 op=UNLOAD Oct 2 19:19:40.349189 kernel: audit: type=1334 audit(1696274380.345:771): prog-id=84 op=UNLOAD Oct 2 19:19:40.360723 kubelet[2201]: I1002 19:19:40.360613 2201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-etc-cni-netd\") pod \"a054447d-2a60-44bf-9c27-9351e1dac17f\" (UID: \"a054447d-2a60-44bf-9c27-9351e1dac17f\") " Oct 2 19:19:40.360723 kubelet[2201]: I1002 19:19:40.360669 2201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a054447d-2a60-44bf-9c27-9351e1dac17f" (UID: "a054447d-2a60-44bf-9c27-9351e1dac17f"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:19:40.360723 kubelet[2201]: I1002 19:19:40.360716 2201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jprz\" (UniqueName: \"kubernetes.io/projected/a054447d-2a60-44bf-9c27-9351e1dac17f-kube-api-access-4jprz\") pod \"a054447d-2a60-44bf-9c27-9351e1dac17f\" (UID: \"a054447d-2a60-44bf-9c27-9351e1dac17f\") " Oct 2 19:19:40.361092 kubelet[2201]: I1002 19:19:40.360792 2201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a054447d-2a60-44bf-9c27-9351e1dac17f-cilium-ipsec-secrets\") pod \"a054447d-2a60-44bf-9c27-9351e1dac17f\" (UID: \"a054447d-2a60-44bf-9c27-9351e1dac17f\") " Oct 2 19:19:40.361092 kubelet[2201]: I1002 19:19:40.360835 2201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-cni-path\") pod \"a054447d-2a60-44bf-9c27-9351e1dac17f\" (UID: \"a054447d-2a60-44bf-9c27-9351e1dac17f\") " Oct 2 19:19:40.361092 kubelet[2201]: I1002 19:19:40.360915 2201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-bpf-maps\") pod \"a054447d-2a60-44bf-9c27-9351e1dac17f\" (UID: \"a054447d-2a60-44bf-9c27-9351e1dac17f\") " Oct 2 19:19:40.361092 kubelet[2201]: I1002 19:19:40.360985 2201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-hostproc\") pod \"a054447d-2a60-44bf-9c27-9351e1dac17f\" (UID: \"a054447d-2a60-44bf-9c27-9351e1dac17f\") " Oct 2 19:19:40.361092 kubelet[2201]: I1002 19:19:40.361050 2201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-cilium-cgroup\") pod \"a054447d-2a60-44bf-9c27-9351e1dac17f\" (UID: \"a054447d-2a60-44bf-9c27-9351e1dac17f\") " Oct 2 19:19:40.361092 kubelet[2201]: I1002 19:19:40.361092 2201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-cilium-run\") pod \"a054447d-2a60-44bf-9c27-9351e1dac17f\" (UID: \"a054447d-2a60-44bf-9c27-9351e1dac17f\") " Oct 2 19:19:40.361483 kubelet[2201]: I1002 19:19:40.361174 2201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-xtables-lock\") pod \"a054447d-2a60-44bf-9c27-9351e1dac17f\" (UID: \"a054447d-2a60-44bf-9c27-9351e1dac17f\") " Oct 2 19:19:40.361483 kubelet[2201]: I1002 19:19:40.361247 2201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-host-proc-sys-kernel\") pod \"a054447d-2a60-44bf-9c27-9351e1dac17f\" (UID: \"a054447d-2a60-44bf-9c27-9351e1dac17f\") " Oct 2 19:19:40.361483 kubelet[2201]: I1002 19:19:40.361316 2201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a054447d-2a60-44bf-9c27-9351e1dac17f-clustermesh-secrets\") pod \"a054447d-2a60-44bf-9c27-9351e1dac17f\" (UID: \"a054447d-2a60-44bf-9c27-9351e1dac17f\") " Oct 2 
19:19:40.361483 kubelet[2201]: I1002 19:19:40.361366 2201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a054447d-2a60-44bf-9c27-9351e1dac17f-hubble-tls\") pod \"a054447d-2a60-44bf-9c27-9351e1dac17f\" (UID: \"a054447d-2a60-44bf-9c27-9351e1dac17f\") " Oct 2 19:19:40.361483 kubelet[2201]: I1002 19:19:40.361431 2201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-host-proc-sys-net\") pod \"a054447d-2a60-44bf-9c27-9351e1dac17f\" (UID: \"a054447d-2a60-44bf-9c27-9351e1dac17f\") " Oct 2 19:19:40.361778 kubelet[2201]: I1002 19:19:40.361495 2201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-lib-modules\") pod \"a054447d-2a60-44bf-9c27-9351e1dac17f\" (UID: \"a054447d-2a60-44bf-9c27-9351e1dac17f\") " Oct 2 19:19:40.361778 kubelet[2201]: I1002 19:19:40.361544 2201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a054447d-2a60-44bf-9c27-9351e1dac17f-cilium-config-path\") pod \"a054447d-2a60-44bf-9c27-9351e1dac17f\" (UID: \"a054447d-2a60-44bf-9c27-9351e1dac17f\") " Oct 2 19:19:40.361778 kubelet[2201]: I1002 19:19:40.361610 2201 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-etc-cni-netd\") on node \"172.31.24.89\" DevicePath \"\"" Oct 2 19:19:40.365144 kubelet[2201]: I1002 19:19:40.362004 2201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a054447d-2a60-44bf-9c27-9351e1dac17f" (UID: "a054447d-2a60-44bf-9c27-9351e1dac17f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:19:40.365144 kubelet[2201]: I1002 19:19:40.362209 2201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a054447d-2a60-44bf-9c27-9351e1dac17f" (UID: "a054447d-2a60-44bf-9c27-9351e1dac17f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:19:40.365144 kubelet[2201]: I1002 19:19:40.362256 2201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a054447d-2a60-44bf-9c27-9351e1dac17f" (UID: "a054447d-2a60-44bf-9c27-9351e1dac17f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:19:40.365144 kubelet[2201]: I1002 19:19:40.362941 2201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-cni-path" (OuterVolumeSpecName: "cni-path") pod "a054447d-2a60-44bf-9c27-9351e1dac17f" (UID: "a054447d-2a60-44bf-9c27-9351e1dac17f"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:19:40.365144 kubelet[2201]: I1002 19:19:40.362999 2201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a054447d-2a60-44bf-9c27-9351e1dac17f" (UID: "a054447d-2a60-44bf-9c27-9351e1dac17f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:19:40.365144 kubelet[2201]: I1002 19:19:40.363039 2201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-hostproc" (OuterVolumeSpecName: "hostproc") pod "a054447d-2a60-44bf-9c27-9351e1dac17f" (UID: "a054447d-2a60-44bf-9c27-9351e1dac17f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:19:40.365144 kubelet[2201]: I1002 19:19:40.363078 2201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a054447d-2a60-44bf-9c27-9351e1dac17f" (UID: "a054447d-2a60-44bf-9c27-9351e1dac17f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:19:40.365688 kubelet[2201]: I1002 19:19:40.365431 2201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a054447d-2a60-44bf-9c27-9351e1dac17f" (UID: "a054447d-2a60-44bf-9c27-9351e1dac17f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:19:40.365688 kubelet[2201]: I1002 19:19:40.365494 2201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a054447d-2a60-44bf-9c27-9351e1dac17f" (UID: "a054447d-2a60-44bf-9c27-9351e1dac17f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:19:40.370257 kubelet[2201]: I1002 19:19:40.370113 2201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a054447d-2a60-44bf-9c27-9351e1dac17f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a054447d-2a60-44bf-9c27-9351e1dac17f" (UID: "a054447d-2a60-44bf-9c27-9351e1dac17f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:19:40.379345 systemd[1]: var-lib-kubelet-pods-a054447d\x2d2a60\x2d44bf\x2d9c27\x2d9351e1dac17f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:19:40.383539 kubelet[2201]: I1002 19:19:40.383475 2201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a054447d-2a60-44bf-9c27-9351e1dac17f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a054447d-2a60-44bf-9c27-9351e1dac17f" (UID: "a054447d-2a60-44bf-9c27-9351e1dac17f"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:19:40.397684 kubelet[2201]: I1002 19:19:40.397586 2201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a054447d-2a60-44bf-9c27-9351e1dac17f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a054447d-2a60-44bf-9c27-9351e1dac17f" (UID: "a054447d-2a60-44bf-9c27-9351e1dac17f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:19:40.399658 kubelet[2201]: I1002 19:19:40.399602 2201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a054447d-2a60-44bf-9c27-9351e1dac17f-kube-api-access-4jprz" (OuterVolumeSpecName: "kube-api-access-4jprz") pod "a054447d-2a60-44bf-9c27-9351e1dac17f" (UID: "a054447d-2a60-44bf-9c27-9351e1dac17f"). InnerVolumeSpecName "kube-api-access-4jprz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:19:40.401233 kubelet[2201]: I1002 19:19:40.401188 2201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a054447d-2a60-44bf-9c27-9351e1dac17f-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "a054447d-2a60-44bf-9c27-9351e1dac17f" (UID: "a054447d-2a60-44bf-9c27-9351e1dac17f"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:19:40.424919 kubelet[2201]: I1002 19:19:40.424864 2201 scope.go:117] "RemoveContainer" containerID="a7cd21580be6804712332b56a6e9461766ca58faf1ef16bb21cbcded1cff04bb" Oct 2 19:19:40.433188 env[1736]: time="2023-10-02T19:19:40.433089083Z" level=info msg="RemoveContainer for \"a7cd21580be6804712332b56a6e9461766ca58faf1ef16bb21cbcded1cff04bb\"" Oct 2 19:19:40.438204 systemd[1]: Removed slice kubepods-burstable-poda054447d_2a60_44bf_9c27_9351e1dac17f.slice. 
Oct 2 19:19:40.439774 env[1736]: time="2023-10-02T19:19:40.439690651Z" level=info msg="shim disconnected" id=a277bf5fd49ba93dc901aa84fbbfabdc0c23b62cbfee857d83e0e9027a4e44d2 Oct 2 19:19:40.440025 env[1736]: time="2023-10-02T19:19:40.439990676Z" level=warning msg="cleaning up after shim disconnected" id=a277bf5fd49ba93dc901aa84fbbfabdc0c23b62cbfee857d83e0e9027a4e44d2 namespace=k8s.io Oct 2 19:19:40.440276 env[1736]: time="2023-10-02T19:19:40.440244406Z" level=info msg="cleaning up dead shim" Oct 2 19:19:40.440492 env[1736]: time="2023-10-02T19:19:40.440429135Z" level=info msg="RemoveContainer for \"a7cd21580be6804712332b56a6e9461766ca58faf1ef16bb21cbcded1cff04bb\" returns successfully" Oct 2 19:19:40.462343 kubelet[2201]: I1002 19:19:40.461926 2201 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-host-proc-sys-kernel\") on node \"172.31.24.89\" DevicePath \"\"" Oct 2 19:19:40.462343 kubelet[2201]: I1002 19:19:40.461976 2201 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a054447d-2a60-44bf-9c27-9351e1dac17f-clustermesh-secrets\") on node \"172.31.24.89\" DevicePath \"\"" Oct 2 19:19:40.462343 kubelet[2201]: I1002 19:19:40.462001 2201 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a054447d-2a60-44bf-9c27-9351e1dac17f-hubble-tls\") on node \"172.31.24.89\" DevicePath \"\"" Oct 2 19:19:40.462343 kubelet[2201]: I1002 19:19:40.462024 2201 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-host-proc-sys-net\") on node \"172.31.24.89\" DevicePath \"\"" Oct 2 19:19:40.462343 kubelet[2201]: I1002 19:19:40.462047 2201 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-lib-modules\") on node \"172.31.24.89\" DevicePath \"\"" Oct 2 19:19:40.462343 kubelet[2201]: I1002 19:19:40.462070 2201 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a054447d-2a60-44bf-9c27-9351e1dac17f-cilium-config-path\") on node \"172.31.24.89\" DevicePath \"\"" Oct 2 19:19:40.462343 kubelet[2201]: I1002 19:19:40.462093 2201 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4jprz\" (UniqueName: \"kubernetes.io/projected/a054447d-2a60-44bf-9c27-9351e1dac17f-kube-api-access-4jprz\") on node \"172.31.24.89\" DevicePath \"\"" Oct 2 19:19:40.462343 kubelet[2201]: I1002 19:19:40.462131 2201 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a054447d-2a60-44bf-9c27-9351e1dac17f-cilium-ipsec-secrets\") on node \"172.31.24.89\" DevicePath \"\"" Oct 2 19:19:40.462343 kubelet[2201]: I1002 19:19:40.462160 2201 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-bpf-maps\") on node \"172.31.24.89\" DevicePath \"\"" Oct 2 19:19:40.462343 kubelet[2201]: I1002 19:19:40.462187 2201 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-hostproc\") on node \"172.31.24.89\" DevicePath \"\"" Oct 2 19:19:40.462343 kubelet[2201]: I1002 19:19:40.462210 2201 reconciler_common.go:300] "Volume detached for 
volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-cni-path\") on node \"172.31.24.89\" DevicePath \"\"" Oct 2 19:19:40.462343 kubelet[2201]: I1002 19:19:40.462232 2201 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-cilium-cgroup\") on node \"172.31.24.89\" DevicePath \"\"" Oct 2 19:19:40.462343 kubelet[2201]: I1002 19:19:40.462254 2201 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-cilium-run\") on node \"172.31.24.89\" DevicePath \"\"" Oct 2 19:19:40.462343 kubelet[2201]: I1002 19:19:40.462277 2201 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a054447d-2a60-44bf-9c27-9351e1dac17f-xtables-lock\") on node \"172.31.24.89\" DevicePath \"\"" Oct 2 19:19:40.471474 env[1736]: time="2023-10-02T19:19:40.471415121Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:19:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3318 runtime=io.containerd.runc.v2\n" Oct 2 19:19:40.472309 env[1736]: time="2023-10-02T19:19:40.472261341Z" level=info msg="TearDown network for sandbox \"a277bf5fd49ba93dc901aa84fbbfabdc0c23b62cbfee857d83e0e9027a4e44d2\" successfully" Oct 2 19:19:40.472497 env[1736]: time="2023-10-02T19:19:40.472460926Z" level=info msg="StopPodSandbox for \"a277bf5fd49ba93dc901aa84fbbfabdc0c23b62cbfee857d83e0e9027a4e44d2\" returns successfully" Oct 2 19:19:40.562476 kubelet[2201]: I1002 19:19:40.562418 2201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a4565b06-d10d-4d8e-a28e-b41e49f8343a-cilium-config-path\") pod \"a4565b06-d10d-4d8e-a28e-b41e49f8343a\" (UID: \"a4565b06-d10d-4d8e-a28e-b41e49f8343a\") " Oct 2 19:19:40.562672 kubelet[2201]: I1002 19:19:40.562490 2201 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qr65\" (UniqueName: \"kubernetes.io/projected/a4565b06-d10d-4d8e-a28e-b41e49f8343a-kube-api-access-5qr65\") pod \"a4565b06-d10d-4d8e-a28e-b41e49f8343a\" (UID: \"a4565b06-d10d-4d8e-a28e-b41e49f8343a\") " Oct 2 19:19:40.569774 kubelet[2201]: I1002 19:19:40.569721 2201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4565b06-d10d-4d8e-a28e-b41e49f8343a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a4565b06-d10d-4d8e-a28e-b41e49f8343a" (UID: "a4565b06-d10d-4d8e-a28e-b41e49f8343a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:19:40.571389 kubelet[2201]: I1002 19:19:40.571344 2201 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4565b06-d10d-4d8e-a28e-b41e49f8343a-kube-api-access-5qr65" (OuterVolumeSpecName: "kube-api-access-5qr65") pod "a4565b06-d10d-4d8e-a28e-b41e49f8343a" (UID: "a4565b06-d10d-4d8e-a28e-b41e49f8343a"). InnerVolumeSpecName "kube-api-access-5qr65". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:19:40.663773 kubelet[2201]: I1002 19:19:40.663638 2201 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a4565b06-d10d-4d8e-a28e-b41e49f8343a-cilium-config-path\") on node \"172.31.24.89\" DevicePath \"\"" Oct 2 19:19:40.663773 kubelet[2201]: I1002 19:19:40.663694 2201 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5qr65\" (UniqueName: \"kubernetes.io/projected/a4565b06-d10d-4d8e-a28e-b41e49f8343a-kube-api-access-5qr65\") on node \"172.31.24.89\" DevicePath \"\"" Oct 2 19:19:40.720879 kubelet[2201]: E1002 19:19:40.720824 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:40.759651 kubelet[2201]: I1002 19:19:40.759598 2201 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a054447d-2a60-44bf-9c27-9351e1dac17f" path="/var/lib/kubelet/pods/a054447d-2a60-44bf-9c27-9351e1dac17f/volumes" Oct 2 19:19:40.769255 systemd[1]: Removed slice kubepods-besteffort-poda4565b06_d10d_4d8e_a28e_b41e49f8343a.slice. Oct 2 19:19:41.119054 systemd[1]: var-lib-kubelet-pods-a054447d\x2d2a60\x2d44bf\x2d9c27\x2d9351e1dac17f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4jprz.mount: Deactivated successfully. Oct 2 19:19:41.119256 systemd[1]: var-lib-kubelet-pods-a054447d\x2d2a60\x2d44bf\x2d9c27\x2d9351e1dac17f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:19:41.119395 systemd[1]: var-lib-kubelet-pods-a054447d\x2d2a60\x2d44bf\x2d9c27\x2d9351e1dac17f-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Oct 2 19:19:41.119528 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a277bf5fd49ba93dc901aa84fbbfabdc0c23b62cbfee857d83e0e9027a4e44d2-rootfs.mount: Deactivated successfully. Oct 2 19:19:41.119656 systemd[1]: var-lib-kubelet-pods-a4565b06\x2dd10d\x2d4d8e\x2da28e\x2db41e49f8343a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5qr65.mount: Deactivated successfully. 
Oct 2 19:19:41.439773 kubelet[2201]: I1002 19:19:41.439229 2201 scope.go:117] "RemoveContainer" containerID="c026181b84de17bda3d65825249134663e46546301d8d9ae8f54eb3d1bf60226" Oct 2 19:19:41.443894 env[1736]: time="2023-10-02T19:19:41.443828696Z" level=info msg="RemoveContainer for \"c026181b84de17bda3d65825249134663e46546301d8d9ae8f54eb3d1bf60226\"" Oct 2 19:19:41.448682 env[1736]: time="2023-10-02T19:19:41.448608775Z" level=info msg="RemoveContainer for \"c026181b84de17bda3d65825249134663e46546301d8d9ae8f54eb3d1bf60226\" returns successfully" Oct 2 19:19:41.683318 kubelet[2201]: E1002 19:19:41.683264 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:41.722047 kubelet[2201]: E1002 19:19:41.721937 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:42.723141 kubelet[2201]: E1002 19:19:42.723063 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:42.758625 kubelet[2201]: I1002 19:19:42.758552 2201 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a4565b06-d10d-4d8e-a28e-b41e49f8343a" path="/var/lib/kubelet/pods/a4565b06-d10d-4d8e-a28e-b41e49f8343a/volumes" Oct 2 19:19:43.723954 kubelet[2201]: E1002 19:19:43.723878 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:44.725176 kubelet[2201]: E1002 19:19:44.725103 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:45.726493 kubelet[2201]: E1002 19:19:45.726423 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:46.684208 kubelet[2201]: E1002 19:19:46.684173 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:46.726855 kubelet[2201]: E1002 19:19:46.726827 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:47.728404 kubelet[2201]: E1002 19:19:47.728332 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:48.729246 kubelet[2201]: E1002 19:19:48.729181 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:49.730247 kubelet[2201]: E1002 19:19:49.730181 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:50.730657 kubelet[2201]: E1002 19:19:50.730589 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:51.686103 kubelet[2201]: E1002 19:19:51.686047 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:51.731017 kubelet[2201]: E1002 19:19:51.730983 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:52.732194 kubelet[2201]: 
E1002 19:19:52.732148 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:53.732975 kubelet[2201]: E1002 19:19:53.732898 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:54.733136 kubelet[2201]: E1002 19:19:54.733070 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:55.733739 kubelet[2201]: E1002 19:19:55.733697 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:56.474793 kubelet[2201]: E1002 19:19:56.474706 2201 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:56.686997 kubelet[2201]: E1002 19:19:56.686939 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:56.735214 kubelet[2201]: E1002 19:19:56.735043 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:57.735884 kubelet[2201]: E1002 19:19:57.735839 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:58.737336 kubelet[2201]: E1002 19:19:58.737292 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:59.739077 kubelet[2201]: E1002 19:19:59.739008 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:00.739393 kubelet[2201]: E1002 19:20:00.739350 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:01.688528 kubelet[2201]: E1002 19:20:01.688496 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:01.740822 kubelet[2201]: E1002 19:20:01.740772 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:02.741868 kubelet[2201]: E1002 19:20:02.741799 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:03.105139 kubelet[2201]: E1002 19:20:03.104991 2201 controller.go:193] "Failed to update lease" err="Put \"https://172.31.29.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.24.89?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 2 19:20:03.742999 kubelet[2201]: E1002 19:20:03.742947 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:04.744709 kubelet[2201]: E1002 19:20:04.744644 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:05.745791 kubelet[2201]: E1002 19:20:05.745722 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:06.689872 kubelet[2201]: E1002 19:20:06.689809 2201 kubelet.go:2855] 
"Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:06.746237 kubelet[2201]: E1002 19:20:06.746173 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:06.845392 amazon-ssm-agent[1715]: 2023-10-02 19:20:06 INFO Backing off health check to every 600 seconds for 1800 seconds. Oct 2 19:20:06.945923 amazon-ssm-agent[1715]: 2023-10-02 19:20:06 ERROR Health ping failed with error - AccessDeniedException: User: arn:aws:sts::075585003325:assumed-role/jenkins-test/i-00a3a66153216e1a6 is not authorized to perform: ssm:UpdateInstanceInformation on resource: arn:aws:ec2:us-west-2:075585003325:instance/i-00a3a66153216e1a6 because no identity-based policy allows the ssm:UpdateInstanceInformation action Oct 2 19:20:06.945923 amazon-ssm-agent[1715]: status code: 400, request id: 61dd5e9a-a8d8-4623-aead-933079dbf743 Oct 2 19:20:07.747293 kubelet[2201]: E1002 19:20:07.747222 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:08.748321 kubelet[2201]: E1002 19:20:08.748279 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:09.749913 kubelet[2201]: E1002 19:20:09.749850 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:10.750958 kubelet[2201]: E1002 19:20:10.750885 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:11.691054 kubelet[2201]: E1002 19:20:11.691007 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:11.751076 kubelet[2201]: E1002 19:20:11.751027 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:12.751782 kubelet[2201]: E1002 19:20:12.751739 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:13.106646 kubelet[2201]: E1002 19:20:13.106514 2201 controller.go:193] "Failed to update lease" err="Put \"https://172.31.29.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.24.89?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 2 19:20:13.752863 kubelet[2201]: E1002 19:20:13.752762 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:14.754261 kubelet[2201]: E1002 19:20:14.754219 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:15.755870 kubelet[2201]: E1002 19:20:15.755827 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:15.904399 kubelet[2201]: E1002 19:20:15.904362 2201 controller.go:193] "Failed to update lease" err="Put \"https://172.31.29.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.24.89?timeout=10s\": unexpected EOF" Oct 2 19:20:15.905437 kubelet[2201]: E1002 19:20:15.905394 2201 controller.go:193] "Failed to update 
lease" err="Put \"https://172.31.29.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.24.89?timeout=10s\": dial tcp 172.31.29.254:6443: connect: connection refused" Oct 2 19:20:15.906021 kubelet[2201]: E1002 19:20:15.905992 2201 controller.go:193] "Failed to update lease" err="Put \"https://172.31.29.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.24.89?timeout=10s\": dial tcp 172.31.29.254:6443: connect: connection refused" Oct 2 19:20:15.906334 kubelet[2201]: I1002 19:20:15.906309 2201 controller.go:116] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Oct 2 19:20:15.907032 kubelet[2201]: E1002 19:20:15.906973 2201 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.24.89?timeout=10s\": dial tcp 172.31.29.254:6443: connect: connection refused" interval="200ms" Oct 2 19:20:16.108033 kubelet[2201]: E1002 19:20:16.107916 2201 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.24.89?timeout=10s\": dial tcp 172.31.29.254:6443: connect: connection refused" interval="400ms" Oct 2 19:20:16.475474 kubelet[2201]: E1002 19:20:16.475433 2201 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:16.509149 kubelet[2201]: E1002 19:20:16.509088 2201 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.24.89?timeout=10s\": dial tcp 172.31.29.254:6443: connect: connection refused" interval="800ms" Oct 2 19:20:16.518180 env[1736]: time="2023-10-02T19:20:16.518101363Z" level=info msg="StopPodSandbox for \"a277bf5fd49ba93dc901aa84fbbfabdc0c23b62cbfee857d83e0e9027a4e44d2\"" Oct 2 19:20:16.518992 env[1736]: time="2023-10-02T19:20:16.518896138Z" level=info msg="TearDown network for sandbox \"a277bf5fd49ba93dc901aa84fbbfabdc0c23b62cbfee857d83e0e9027a4e44d2\" successfully" Oct 2 19:20:16.519187 env[1736]: time="2023-10-02T19:20:16.519111658Z" level=info msg="StopPodSandbox for \"a277bf5fd49ba93dc901aa84fbbfabdc0c23b62cbfee857d83e0e9027a4e44d2\" returns successfully" Oct 2 19:20:16.520015 env[1736]: time="2023-10-02T19:20:16.519953377Z" level=info msg="RemovePodSandbox for \"a277bf5fd49ba93dc901aa84fbbfabdc0c23b62cbfee857d83e0e9027a4e44d2\"" Oct 2 19:20:16.520301 env[1736]: time="2023-10-02T19:20:16.520223570Z" level=info msg="Forcibly stopping sandbox \"a277bf5fd49ba93dc901aa84fbbfabdc0c23b62cbfee857d83e0e9027a4e44d2\"" Oct 2 19:20:16.520559 env[1736]: time="2023-10-02T19:20:16.520523763Z" level=info msg="TearDown network for sandbox \"a277bf5fd49ba93dc901aa84fbbfabdc0c23b62cbfee857d83e0e9027a4e44d2\" successfully" Oct 2 19:20:16.526634 env[1736]: time="2023-10-02T19:20:16.526546569Z" level=info msg="RemovePodSandbox \"a277bf5fd49ba93dc901aa84fbbfabdc0c23b62cbfee857d83e0e9027a4e44d2\" returns successfully" Oct 2 19:20:16.527634 env[1736]: time="2023-10-02T19:20:16.527569344Z" level=info msg="StopPodSandbox for \"4feadb73dba0f84388fad9fb95e4bae196010a62fd0bd63b9cd24cd672e26ff6\"" Oct 2 19:20:16.527993 env[1736]: time="2023-10-02T19:20:16.527908561Z" level=info msg="TearDown network for sandbox \"4feadb73dba0f84388fad9fb95e4bae196010a62fd0bd63b9cd24cd672e26ff6\" successfully" Oct 2 
19:20:16.528406 env[1736]: time="2023-10-02T19:20:16.528108889Z" level=info msg="StopPodSandbox for \"4feadb73dba0f84388fad9fb95e4bae196010a62fd0bd63b9cd24cd672e26ff6\" returns successfully" Oct 2 19:20:16.529262 env[1736]: time="2023-10-02T19:20:16.529214681Z" level=info msg="RemovePodSandbox for \"4feadb73dba0f84388fad9fb95e4bae196010a62fd0bd63b9cd24cd672e26ff6\"" Oct 2 19:20:16.529403 env[1736]: time="2023-10-02T19:20:16.529268153Z" level=info msg="Forcibly stopping sandbox \"4feadb73dba0f84388fad9fb95e4bae196010a62fd0bd63b9cd24cd672e26ff6\"" Oct 2 19:20:16.529403 env[1736]: time="2023-10-02T19:20:16.529388357Z" level=info msg="TearDown network for sandbox \"4feadb73dba0f84388fad9fb95e4bae196010a62fd0bd63b9cd24cd672e26ff6\" successfully" Oct 2 19:20:16.533669 env[1736]: time="2023-10-02T19:20:16.533593638Z" level=info msg="RemovePodSandbox \"4feadb73dba0f84388fad9fb95e4bae196010a62fd0bd63b9cd24cd672e26ff6\" returns successfully" Oct 2 19:20:16.547096 kubelet[2201]: W1002 19:20:16.547060 2201 machine.go:65] Cannot read vendor id correctly, set empty. Oct 2 19:20:16.692507 kubelet[2201]: E1002 19:20:16.692420 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:16.757361 kubelet[2201]: E1002 19:20:16.757193 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:17.757430 kubelet[2201]: E1002 19:20:17.757384 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:18.758226 kubelet[2201]: E1002 19:20:18.758180 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:19.758592 kubelet[2201]: E1002 19:20:19.758516 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:20.759076 kubelet[2201]: E1002 19:20:20.759020 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:21.693601 kubelet[2201]: E1002 19:20:21.693568 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:21.759385 kubelet[2201]: E1002 19:20:21.759360 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:22.761070 kubelet[2201]: E1002 19:20:22.761008 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:23.761448 kubelet[2201]: E1002 19:20:23.761413 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:24.763210 kubelet[2201]: E1002 19:20:24.763144 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:25.764278 kubelet[2201]: E1002 19:20:25.764220 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:26.694490 kubelet[2201]: E1002 19:20:26.694456 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:26.764845 kubelet[2201]: E1002 19:20:26.764813 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:27.310460 kubelet[2201]: E1002 19:20:27.310420 2201 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.24.89?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Oct 2 19:20:27.766536 kubelet[2201]: E1002 19:20:27.766502 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:28.767651 kubelet[2201]: E1002 19:20:28.767597 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:29.768052 kubelet[2201]: E1002 19:20:29.767999 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:30.768401 kubelet[2201]: E1002 19:20:30.768368 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:31.695809 kubelet[2201]: E1002 19:20:31.695715 2201 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:20:31.769853 kubelet[2201]: E1002 19:20:31.769824 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:32.770515 kubelet[2201]: E1002 19:20:32.770480 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:33.772281 kubelet[2201]: E1002 19:20:33.772221 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:34.772361 kubelet[2201]: E1002 19:20:34.772327 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:20:35.114837 kubelet[2201]: E1002 19:20:35.114705 2201 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.24.89\": Get \"https://172.31.29.254:6443/api/v1/nodes/172.31.24.89?resourceVersion=0&timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Oct 2 19:20:35.774071 kubelet[2201]: E1002 19:20:35.774034 2201 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
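The amazon-ssm-agent health ping above fails with an AccessDeniedException because the instance's assumed role (`jenkins-test`, per the ARN in the log) has no identity-based policy allowing `ssm:UpdateInstanceInformation`. A hedged sketch of one way to address that with boto3 follows; it must be run with credentials that have IAM write access (not from the denied instance role itself), the inline policy name is hypothetical, and attaching the AWS-managed `AmazonSSMManagedInstanceCore` policy is the broader alternative.

```python
# Sketch only (assumptions: boto3 installed, caller has iam:PutRolePolicy;
# role name "jenkins-test" comes from the assumed-role ARN in the log,
# policy name "allow-ssm-health-ping" is hypothetical).
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "ssm:UpdateInstanceInformation",
        "Resource": "*",   # could be narrowed to the instance ARN from the log
    }],
}

iam.put_role_policy(
    RoleName="jenkins-test",
    PolicyName="allow-ssm-health-ping",
    PolicyDocument=json.dumps(policy),
)

# Alternatively, attach the AWS-managed policy that covers the SSM agent:
# iam.attach_role_policy(
#     RoleName="jenkins-test",
#     PolicyArn="arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
# )
```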
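The recurring controller.go errors show the kubelet failing to renew its node Lease against https://172.31.29.254:6443, first with client timeouts, then "connection refused", then "failed 5 attempts to update lease" and retries at growing intervals; together with the failed node-status update, this points at API-server availability rather than a problem on the node. Once that endpoint accepts connections again, the lease can be inspected directly; a diagnostic sketch using the `kubernetes` Python client (assumption: a kubeconfig pointing at the same API server) is below.

```python
# Diagnostic sketch: read the node Lease the kubelet is failing to renew.
# Node leases live in the kube-node-lease namespace and are named after the
# node; "172.31.24.89" is the node name in the lease URLs logged above.
from kubernetes import client, config

config.load_kube_config()
coord = client.CoordinationV1Api()

lease = coord.read_namespaced_lease(name="172.31.24.89",
                                    namespace="kube-node-lease")
print("holder:        ", lease.spec.holder_identity)
print("last renewed:  ", lease.spec.renew_time)
print("lease duration:", lease.spec.lease_duration_seconds, "s")
```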