Oct 2 19:14:36.162328 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Oct 2 19:14:36.162363 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Oct 2 17:55:37 -00 2023
Oct 2 19:14:36.162385 kernel: efi: EFI v2.70 by EDK II
Oct 2 19:14:36.162400 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x71accf98
Oct 2 19:14:36.162414 kernel: ACPI: Early table checksum verification disabled
Oct 2 19:14:36.162427 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Oct 2 19:14:36.162443 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Oct 2 19:14:36.162477 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Oct 2 19:14:36.162492 kernel: ACPI: DSDT 0x0000000078640000 00154F (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Oct 2 19:14:36.162506 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Oct 2 19:14:36.162525 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Oct 2 19:14:36.162539 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Oct 2 19:14:36.162552 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Oct 2 19:14:36.162566 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Oct 2 19:14:36.162582 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Oct 2 19:14:36.162601 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Oct 2 19:14:36.162615 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Oct 2 19:14:36.162630 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Oct 2 19:14:36.162644 kernel: printk: bootconsole [uart0] enabled
Oct 2 19:14:36.162658 kernel: NUMA: Failed to initialise from firmware
Oct 2 19:14:36.162673 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Oct 2 19:14:36.162688 kernel: NUMA: NODE_DATA [mem 0x4b5841900-0x4b5846fff]
Oct 2 19:14:36.162702 kernel: Zone ranges:
Oct 2 19:14:36.162717 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Oct 2 19:14:36.162731 kernel: DMA32 empty
Oct 2 19:14:36.162745 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Oct 2 19:14:36.162763 kernel: Movable zone start for each node
Oct 2 19:14:36.162778 kernel: Early memory node ranges
Oct 2 19:14:36.162792 kernel: node 0: [mem 0x0000000040000000-0x00000000786effff]
Oct 2 19:14:36.162806 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Oct 2 19:14:36.162821 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Oct 2 19:14:36.162835 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Oct 2 19:14:36.162849 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Oct 2 19:14:36.162863 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Oct 2 19:14:36.162877 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Oct 2 19:14:36.162892 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Oct 2 19:14:36.162923 kernel: psci: probing for conduit method from ACPI.
Oct 2 19:14:36.162941 kernel: psci: PSCIv1.0 detected in firmware.
Oct 2 19:14:36.162961 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 2 19:14:36.162976 kernel: psci: Trusted OS migration not required
Oct 2 19:14:36.162996 kernel: psci: SMC Calling Convention v1.1
Oct 2 19:14:36.163012 kernel: ACPI: SRAT not present
Oct 2 19:14:36.163028 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Oct 2 19:14:36.163047 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Oct 2 19:14:36.163062 kernel: pcpu-alloc: [0] 0 [0] 1
Oct 2 19:14:36.163077 kernel: Detected PIPT I-cache on CPU0
Oct 2 19:14:36.163092 kernel: CPU features: detected: GIC system register CPU interface
Oct 2 19:14:36.163107 kernel: CPU features: detected: Spectre-v2
Oct 2 19:14:36.163122 kernel: CPU features: detected: Spectre-v3a
Oct 2 19:14:36.163137 kernel: CPU features: detected: Spectre-BHB
Oct 2 19:14:36.163152 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 2 19:14:36.163167 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 2 19:14:36.163182 kernel: CPU features: detected: ARM erratum 1742098
Oct 2 19:14:36.163197 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Oct 2 19:14:36.163215 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Oct 2 19:14:36.163231 kernel: Policy zone: Normal
Oct 2 19:14:36.163248 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca
Oct 2 19:14:36.163264 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 2 19:14:36.163279 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 2 19:14:36.163295 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 2 19:14:36.163310 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 2 19:14:36.163325 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Oct 2 19:14:36.163341 kernel: Memory: 3826444K/4030464K available (9792K kernel code, 2092K rwdata, 7548K rodata, 34560K init, 779K bss, 204020K reserved, 0K cma-reserved)
Oct 2 19:14:36.163357 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 2 19:14:36.163375 kernel: trace event string verifier disabled
Oct 2 19:14:36.163390 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 2 19:14:36.163406 kernel: rcu: RCU event tracing is enabled.
Oct 2 19:14:36.163421 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 2 19:14:36.163437 kernel: Trampoline variant of Tasks RCU enabled.
Oct 2 19:14:36.163452 kernel: Tracing variant of Tasks RCU enabled.
Oct 2 19:14:36.163468 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 2 19:14:36.163483 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 2 19:14:36.163498 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 2 19:14:36.163513 kernel: GICv3: 96 SPIs implemented
Oct 2 19:14:36.163527 kernel: GICv3: 0 Extended SPIs implemented
Oct 2 19:14:36.163542 kernel: GICv3: Distributor has no Range Selector support
Oct 2 19:14:36.163561 kernel: Root IRQ handler: gic_handle_irq
Oct 2 19:14:36.163576 kernel: GICv3: 16 PPIs implemented
Oct 2 19:14:36.163591 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Oct 2 19:14:36.163606 kernel: ACPI: SRAT not present
Oct 2 19:14:36.163620 kernel: ITS [mem 0x10080000-0x1009ffff]
Oct 2 19:14:36.163635 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000a0000 (indirect, esz 8, psz 64K, shr 1)
Oct 2 19:14:36.163651 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000b0000 (flat, esz 8, psz 64K, shr 1)
Oct 2 19:14:36.163666 kernel: GICv3: using LPI property table @0x00000004000c0000
Oct 2 19:14:36.163681 kernel: ITS: Using hypervisor restricted LPI range [128]
Oct 2 19:14:36.163696 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Oct 2 19:14:36.163711 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Oct 2 19:14:36.163730 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Oct 2 19:14:36.163746 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Oct 2 19:14:36.163761 kernel: Console: colour dummy device 80x25
Oct 2 19:14:36.163776 kernel: printk: console [tty1] enabled
Oct 2 19:14:36.163792 kernel: ACPI: Core revision 20210730
Oct 2 19:14:36.163808 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Oct 2 19:14:36.163823 kernel: pid_max: default: 32768 minimum: 301
Oct 2 19:14:36.163839 kernel: LSM: Security Framework initializing
Oct 2 19:14:36.163854 kernel: SELinux: Initializing.
Oct 2 19:14:36.163869 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 2 19:14:36.163889 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 2 19:14:36.163922 kernel: rcu: Hierarchical SRCU implementation.
Oct 2 19:14:36.163942 kernel: Platform MSI: ITS@0x10080000 domain created
Oct 2 19:14:36.163958 kernel: PCI/MSI: ITS@0x10080000 domain created
Oct 2 19:14:36.163974 kernel: Remapping and enabling EFI services.
Oct 2 19:14:36.163989 kernel: smp: Bringing up secondary CPUs ...
Oct 2 19:14:36.164005 kernel: Detected PIPT I-cache on CPU1
Oct 2 19:14:36.164020 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Oct 2 19:14:36.164036 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Oct 2 19:14:36.164057 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Oct 2 19:14:36.164072 kernel: smp: Brought up 1 node, 2 CPUs
Oct 2 19:14:36.164088 kernel: SMP: Total of 2 processors activated.
Oct 2 19:14:36.164103 kernel: CPU features: detected: 32-bit EL0 Support
Oct 2 19:14:36.164119 kernel: CPU features: detected: 32-bit EL1 Support
Oct 2 19:14:36.164134 kernel: CPU features: detected: CRC32 instructions
Oct 2 19:14:36.164149 kernel: CPU: All CPU(s) started at EL1
Oct 2 19:14:36.164165 kernel: alternatives: patching kernel code
Oct 2 19:14:36.164180 kernel: devtmpfs: initialized
Oct 2 19:14:36.164199 kernel: KASLR disabled due to lack of seed
Oct 2 19:14:36.164215 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 2 19:14:36.164232 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 2 19:14:36.164258 kernel: pinctrl core: initialized pinctrl subsystem
Oct 2 19:14:36.164278 kernel: SMBIOS 3.0.0 present.
Oct 2 19:14:36.164294 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Oct 2 19:14:36.164310 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 2 19:14:36.164326 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 2 19:14:36.164343 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 2 19:14:36.164359 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 2 19:14:36.164375 kernel: audit: initializing netlink subsys (disabled)
Oct 2 19:14:36.164392 kernel: audit: type=2000 audit(0.249:1): state=initialized audit_enabled=0 res=1
Oct 2 19:14:36.164412 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 2 19:14:36.164428 kernel: cpuidle: using governor menu
Oct 2 19:14:36.164444 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 2 19:14:36.164460 kernel: ASID allocator initialised with 32768 entries
Oct 2 19:14:36.164476 kernel: ACPI: bus type PCI registered
Oct 2 19:14:36.164496 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 2 19:14:36.164513 kernel: Serial: AMBA PL011 UART driver
Oct 2 19:14:36.164529 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Oct 2 19:14:36.164545 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Oct 2 19:14:36.164561 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Oct 2 19:14:36.164577 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Oct 2 19:14:36.164593 kernel: cryptd: max_cpu_qlen set to 1000
Oct 2 19:14:36.164609 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 2 19:14:36.164625 kernel: ACPI: Added _OSI(Module Device)
Oct 2 19:14:36.164645 kernel: ACPI: Added _OSI(Processor Device)
Oct 2 19:14:36.164661 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 2 19:14:36.164677 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 2 19:14:36.164693 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Oct 2 19:14:36.164709 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Oct 2 19:14:36.164725 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Oct 2 19:14:36.164741 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 2 19:14:36.164757 kernel: ACPI: Interpreter enabled
Oct 2 19:14:36.164773 kernel: ACPI: Using GIC for interrupt routing
Oct 2 19:14:36.164793 kernel: ACPI: MCFG table detected, 1 entries
Oct 2 19:14:36.164809 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Oct 2 19:14:36.165215 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 2 19:14:36.165414 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 2 19:14:36.165603 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 2 19:14:36.165792 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Oct 2 19:14:36.166017 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Oct 2 19:14:36.166047 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Oct 2 19:14:36.166065 kernel: acpiphp: Slot [1] registered
Oct 2 19:14:36.166081 kernel: acpiphp: Slot [2] registered
Oct 2 19:14:36.166098 kernel: acpiphp: Slot [3] registered
Oct 2 19:14:36.166114 kernel: acpiphp: Slot [4] registered
Oct 2 19:14:36.166130 kernel: acpiphp: Slot [5] registered
Oct 2 19:14:36.166146 kernel: acpiphp: Slot [6] registered
Oct 2 19:14:36.166162 kernel: acpiphp: Slot [7] registered
Oct 2 19:14:36.166178 kernel: acpiphp: Slot [8] registered
Oct 2 19:14:36.166198 kernel: acpiphp: Slot [9] registered
Oct 2 19:14:36.166214 kernel: acpiphp: Slot [10] registered
Oct 2 19:14:36.166231 kernel: acpiphp: Slot [11] registered
Oct 2 19:14:36.166247 kernel: acpiphp: Slot [12] registered
Oct 2 19:14:36.166263 kernel: acpiphp: Slot [13] registered
Oct 2 19:14:36.166279 kernel: acpiphp: Slot [14] registered
Oct 2 19:14:36.166295 kernel: acpiphp: Slot [15] registered
Oct 2 19:14:36.166310 kernel: acpiphp: Slot [16] registered
Oct 2 19:14:36.166326 kernel: acpiphp: Slot [17] registered
Oct 2 19:14:36.166342 kernel: acpiphp: Slot [18] registered
Oct 2 19:14:36.166363 kernel: acpiphp: Slot [19] registered
Oct 2 19:14:36.166379 kernel: acpiphp: Slot [20] registered
Oct 2 19:14:36.166395 kernel: acpiphp: Slot [21] registered
Oct 2 19:14:36.166411 kernel: acpiphp: Slot [22] registered
Oct 2 19:14:36.166427 kernel: acpiphp: Slot [23] registered
Oct 2 19:14:36.166443 kernel: acpiphp: Slot [24] registered
Oct 2 19:14:36.166478 kernel: acpiphp: Slot [25] registered
Oct 2 19:14:36.166495 kernel: acpiphp: Slot [26] registered
Oct 2 19:14:36.166512 kernel: acpiphp: Slot [27] registered
Oct 2 19:14:36.166532 kernel: acpiphp: Slot [28] registered
Oct 2 19:14:36.166549 kernel: acpiphp: Slot [29] registered
Oct 2 19:14:36.166564 kernel: acpiphp: Slot [30] registered
Oct 2 19:14:36.166580 kernel: acpiphp: Slot [31] registered
Oct 2 19:14:36.166597 kernel: PCI host bridge to bus 0000:00
Oct 2 19:14:36.166804 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Oct 2 19:14:36.168065 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 2 19:14:36.168304 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Oct 2 19:14:36.168538 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Oct 2 19:14:36.168791 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Oct 2 19:14:36.169159 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Oct 2 19:14:36.169370 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Oct 2 19:14:36.169584 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Oct 2 19:14:36.169780 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Oct 2 19:14:36.170072 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Oct 2 19:14:36.170282 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Oct 2 19:14:36.170493 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Oct 2 19:14:36.170690 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Oct 2 19:14:36.170882 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Oct 2 19:14:36.171098 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Oct 2 19:14:36.171289 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Oct 2 19:14:36.171488 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Oct 2 19:14:36.173367 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Oct 2 19:14:36.181220 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Oct 2 19:14:36.181468 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Oct 2 19:14:36.181678 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Oct 2 19:14:36.181882 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 2 19:14:36.182112 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Oct 2 19:14:36.182146 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 2 19:14:36.182163 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 2 19:14:36.182181 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 2 19:14:36.182197 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 2 19:14:36.182213 kernel: iommu: Default domain type: Translated
Oct 2 19:14:36.182230 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 2 19:14:36.182246 kernel: vgaarb: loaded
Oct 2 19:14:36.182262 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 2 19:14:36.182278 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Oct 2 19:14:36.182299 kernel: PTP clock support registered
Oct 2 19:14:36.182315 kernel: Registered efivars operations
Oct 2 19:14:36.182332 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 2 19:14:36.182348 kernel: VFS: Disk quotas dquot_6.6.0
Oct 2 19:14:36.182364 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 2 19:14:36.182380 kernel: pnp: PnP ACPI init
Oct 2 19:14:36.182639 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Oct 2 19:14:36.182666 kernel: pnp: PnP ACPI: found 1 devices
Oct 2 19:14:36.182683 kernel: NET: Registered PF_INET protocol family
Oct 2 19:14:36.182705 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 2 19:14:36.182722 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 2 19:14:36.182739 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 2 19:14:36.182755 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 2 19:14:36.182772 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Oct 2 19:14:36.182788 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 2 19:14:36.182804 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 2 19:14:36.182821 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 2 19:14:36.182837 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 2 19:14:36.182857 kernel: PCI: CLS 0 bytes, default 64
Oct 2 19:14:36.182874 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Oct 2 19:14:36.182890 kernel: kvm [1]: HYP mode not available
Oct 2 19:14:36.182924 kernel: Initialise system trusted keyrings
Oct 2 19:14:36.182945 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 2 19:14:36.182962 kernel: Key type asymmetric registered
Oct 2 19:14:36.182978 kernel: Asymmetric key parser 'x509' registered
Oct 2 19:14:36.182995 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Oct 2 19:14:36.183011 kernel: io scheduler mq-deadline registered
Oct 2 19:14:36.183034 kernel: io scheduler kyber registered
Oct 2 19:14:36.183050 kernel: io scheduler bfq registered
Oct 2 19:14:36.183280 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Oct 2 19:14:36.183305 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 2 19:14:36.183322 kernel: ACPI: button: Power Button [PWRB]
Oct 2 19:14:36.183339 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 2 19:14:36.183356 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Oct 2 19:14:36.183575 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Oct 2 19:14:36.183603 kernel: printk: console [ttyS0] disabled
Oct 2 19:14:36.183620 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Oct 2 19:14:36.183637 kernel: printk: console [ttyS0] enabled
Oct 2 19:14:36.183653 kernel: printk: bootconsole [uart0] disabled
Oct 2 19:14:36.183669 kernel: thunder_xcv, ver 1.0
Oct 2 19:14:36.183685 kernel: thunder_bgx, ver 1.0
Oct 2 19:14:36.183701 kernel: nicpf, ver 1.0
Oct 2 19:14:36.183717 kernel: nicvf, ver 1.0
Oct 2 19:14:36.183976 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 2 19:14:36.184213 kernel: rtc-efi rtc-efi.0: setting system clock to 2023-10-02T19:14:35 UTC (1696274075)
Oct 2 19:14:36.184239 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 2 19:14:36.184255 kernel: NET: Registered PF_INET6 protocol family
Oct 2 19:14:36.184272 kernel: Segment Routing with IPv6
Oct 2 19:14:36.184288 kernel: In-situ OAM (IOAM) with IPv6
Oct 2 19:14:36.184304 kernel: NET: Registered PF_PACKET protocol family
Oct 2 19:14:36.184321 kernel: Key type dns_resolver registered
Oct 2 19:14:36.184337 kernel: registered taskstats version 1
Oct 2 19:14:36.184359 kernel: Loading compiled-in X.509 certificates
Oct 2 19:14:36.184376 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 3a2a38edc68cb70dc60ec0223a6460557b3bb28d'
Oct 2 19:14:36.184392 kernel: Key type .fscrypt registered
Oct 2 19:14:36.184408 kernel: Key type fscrypt-provisioning registered
Oct 2 19:14:36.184424 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 2 19:14:36.184440 kernel: ima: Allocated hash algorithm: sha1
Oct 2 19:14:36.184456 kernel: ima: No architecture policies found
Oct 2 19:14:36.184473 kernel: Freeing unused kernel memory: 34560K
Oct 2 19:14:36.184489 kernel: Run /init as init process
Oct 2 19:14:36.184510 kernel: with arguments:
Oct 2 19:14:36.184526 kernel: /init
Oct 2 19:14:36.184542 kernel: with environment:
Oct 2 19:14:36.184558 kernel: HOME=/
Oct 2 19:14:36.184574 kernel: TERM=linux
Oct 2 19:14:36.184590 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 2 19:14:36.184612 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 2 19:14:36.187060 systemd[1]: Detected virtualization amazon.
Oct 2 19:14:36.187088 systemd[1]: Detected architecture arm64.
Oct 2 19:14:36.187107 systemd[1]: Running in initrd.
Oct 2 19:14:36.187315 systemd[1]: No hostname configured, using default hostname.
Oct 2 19:14:36.187649 systemd[1]: Hostname set to .
Oct 2 19:14:36.187938 systemd[1]: Initializing machine ID from VM UUID.
Oct 2 19:14:36.187960 systemd[1]: Queued start job for default target initrd.target.
Oct 2 19:14:36.187978 systemd[1]: Started systemd-ask-password-console.path.
Oct 2 19:14:36.187995 systemd[1]: Reached target cryptsetup.target.
Oct 2 19:14:36.188019 systemd[1]: Reached target paths.target.
Oct 2 19:14:36.188037 systemd[1]: Reached target slices.target.
Oct 2 19:14:36.188054 systemd[1]: Reached target swap.target.
Oct 2 19:14:36.188071 systemd[1]: Reached target timers.target.
Oct 2 19:14:36.188089 systemd[1]: Listening on iscsid.socket.
Oct 2 19:14:36.188107 systemd[1]: Listening on iscsiuio.socket.
Oct 2 19:14:36.188124 systemd[1]: Listening on systemd-journald-audit.socket.
Oct 2 19:14:36.188142 systemd[1]: Listening on systemd-journald-dev-log.socket.
Oct 2 19:14:36.188163 systemd[1]: Listening on systemd-journald.socket.
Oct 2 19:14:36.188181 systemd[1]: Listening on systemd-networkd.socket.
Oct 2 19:14:36.188199 systemd[1]: Listening on systemd-udevd-control.socket.
Oct 2 19:14:36.188217 systemd[1]: Listening on systemd-udevd-kernel.socket.
Oct 2 19:14:36.188234 systemd[1]: Reached target sockets.target.
Oct 2 19:14:36.188252 systemd[1]: Starting kmod-static-nodes.service...
Oct 2 19:14:36.188270 systemd[1]: Finished network-cleanup.service.
Oct 2 19:14:36.188287 systemd[1]: Starting systemd-fsck-usr.service...
Oct 2 19:14:36.188305 systemd[1]: Starting systemd-journald.service...
Oct 2 19:14:36.188327 systemd[1]: Starting systemd-modules-load.service...
Oct 2 19:14:36.188345 systemd[1]: Starting systemd-resolved.service...
Oct 2 19:14:36.188362 systemd[1]: Starting systemd-vconsole-setup.service...
Oct 2 19:14:36.188380 systemd[1]: Finished kmod-static-nodes.service.
Oct 2 19:14:36.188398 systemd[1]: Finished systemd-fsck-usr.service.
Oct 2 19:14:36.188415 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Oct 2 19:14:36.188433 systemd[1]: Finished systemd-vconsole-setup.service.
Oct 2 19:14:36.188452 kernel: audit: type=1130 audit(1696274076.171:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:36.188475 systemd[1]: Starting dracut-cmdline-ask.service...
Oct 2 19:14:36.188496 systemd-journald[309]: Journal started
Oct 2 19:14:36.188592 systemd-journald[309]: Runtime Journal (/run/log/journal/ec23feb65be95b9caff5e763dae38a54) is 8.0M, max 75.4M, 67.4M free.
Oct 2 19:14:36.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:36.134751 systemd-modules-load[310]: Inserted module 'overlay'
Oct 2 19:14:36.196153 systemd[1]: Started systemd-journald.service.
Oct 2 19:14:36.218128 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 2 19:14:36.218186 kernel: audit: type=1130 audit(1696274076.207:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:36.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:36.208981 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Oct 2 19:14:36.226823 kernel: audit: type=1130 audit(1696274076.207:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:36.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:36.235015 kernel: Bridge firewalling registered
Oct 2 19:14:36.236002 systemd-modules-load[310]: Inserted module 'br_netfilter'
Oct 2 19:14:36.256942 kernel: SCSI subsystem initialized
Oct 2 19:14:36.268013 systemd[1]: Finished dracut-cmdline-ask.service.
Oct 2 19:14:36.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:36.280434 systemd[1]: Starting dracut-cmdline.service...
Oct 2 19:14:36.282086 kernel: audit: type=1130 audit(1696274076.268:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:36.303774 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 2 19:14:36.303841 kernel: device-mapper: uevent: version 1.0.3
Oct 2 19:14:36.313005 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Oct 2 19:14:36.319790 dracut-cmdline[327]: dracut-dracut-053
Oct 2 19:14:36.324083 systemd-modules-load[310]: Inserted module 'dm_multipath'
Oct 2 19:14:36.327944 systemd[1]: Finished systemd-modules-load.service.
Oct 2 19:14:36.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:36.328060 systemd-resolved[311]: Positive Trust Anchors:
Oct 2 19:14:36.329798 systemd-resolved[311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 2 19:14:36.329863 systemd-resolved[311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Oct 2 19:14:36.356388 kernel: audit: type=1130 audit(1696274076.328:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:36.356574 dracut-cmdline[327]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca
Oct 2 19:14:36.341831 systemd[1]: Starting systemd-sysctl.service...
Oct 2 19:14:36.412403 systemd[1]: Finished systemd-sysctl.service.
Oct 2 19:14:36.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:36.424958 kernel: audit: type=1130 audit(1696274076.412:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:36.628988 kernel: Loading iSCSI transport class v2.0-870.
Oct 2 19:14:36.643964 kernel: iscsi: registered transport (tcp)
Oct 2 19:14:36.670785 kernel: iscsi: registered transport (qla4xxx)
Oct 2 19:14:36.670864 kernel: QLogic iSCSI HBA Driver
Oct 2 19:14:36.885629 systemd-resolved[311]: Defaulting to hostname 'linux'.
Oct 2 19:14:36.888286 kernel: random: crng init done
Oct 2 19:14:36.892008 systemd[1]: Started systemd-resolved.service.
Oct 2 19:14:36.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:36.894011 systemd[1]: Reached target nss-lookup.target.
Oct 2 19:14:36.906938 kernel: audit: type=1130 audit(1696274076.891:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:36.957206 systemd[1]: Finished dracut-cmdline.service. Oct 2 19:14:36.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:36.960251 systemd[1]: Starting dracut-pre-udev.service... Oct 2 19:14:36.970959 kernel: audit: type=1130 audit(1696274076.955:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:37.054955 kernel: raid6: neonx8 gen() 6332 MB/s Oct 2 19:14:37.072955 kernel: raid6: neonx8 xor() 4660 MB/s Oct 2 19:14:37.090953 kernel: raid6: neonx4 gen() 6437 MB/s Oct 2 19:14:37.108946 kernel: raid6: neonx4 xor() 4851 MB/s Oct 2 19:14:37.126946 kernel: raid6: neonx2 gen() 5687 MB/s Oct 2 19:14:37.144954 kernel: raid6: neonx2 xor() 4439 MB/s Oct 2 19:14:37.162957 kernel: raid6: neonx1 gen() 4431 MB/s Oct 2 19:14:37.180967 kernel: raid6: neonx1 xor() 3613 MB/s Oct 2 19:14:37.198960 kernel: raid6: int64x8 gen() 3360 MB/s Oct 2 19:14:37.216962 kernel: raid6: int64x8 xor() 2072 MB/s Oct 2 19:14:37.234967 kernel: raid6: int64x4 gen() 3733 MB/s Oct 2 19:14:37.252963 kernel: raid6: int64x4 xor() 2165 MB/s Oct 2 19:14:37.270965 kernel: raid6: int64x2 gen() 3544 MB/s Oct 2 19:14:37.288958 kernel: raid6: int64x2 xor() 1927 MB/s Oct 2 19:14:37.306960 kernel: raid6: int64x1 gen() 2734 MB/s Oct 2 19:14:37.326609 kernel: raid6: int64x1 xor() 1437 MB/s Oct 2 19:14:37.326676 kernel: raid6: using algorithm neonx4 gen() 6437 MB/s Oct 2 19:14:37.326702 kernel: raid6: .... 
xor() 4851 MB/s, rmw enabled Oct 2 19:14:37.328494 kernel: raid6: using neon recovery algorithm Oct 2 19:14:37.347948 kernel: xor: measuring software checksum speed Oct 2 19:14:37.350944 kernel: 8regs : 9360 MB/sec Oct 2 19:14:37.351002 kernel: 32regs : 11103 MB/sec Oct 2 19:14:37.357239 kernel: arm64_neon : 9608 MB/sec Oct 2 19:14:37.357312 kernel: xor: using function: 32regs (11103 MB/sec) Oct 2 19:14:37.450964 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Oct 2 19:14:37.492640 systemd[1]: Finished dracut-pre-udev.service. Oct 2 19:14:37.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:37.495000 audit: BPF prog-id=7 op=LOAD Oct 2 19:14:37.502000 audit: BPF prog-id=8 op=LOAD Oct 2 19:14:37.505384 kernel: audit: type=1130 audit(1696274077.493:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:37.504857 systemd[1]: Starting systemd-udevd.service... Oct 2 19:14:37.543358 systemd-udevd[508]: Using default interface naming scheme 'v252'. Oct 2 19:14:37.554972 systemd[1]: Started systemd-udevd.service. Oct 2 19:14:37.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:37.572377 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 19:14:37.632630 dracut-pre-trigger[518]: rd.md=0: removing MD RAID activation Oct 2 19:14:37.754455 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 19:14:37.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:14:37.759669 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:14:37.884025 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:14:37.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:38.032645 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 2 19:14:38.032730 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Oct 2 19:14:38.041515 kernel: ena 0000:00:05.0: ENA device version: 0.10 Oct 2 19:14:38.041846 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Oct 2 19:14:38.055007 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:07:ef:6e:6b:fb Oct 2 19:14:38.057040 (udev-worker)[564]: Network interface NamePolicy= disabled on kernel command line. Oct 2 19:14:38.085114 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Oct 2 19:14:38.085189 kernel: nvme nvme0: pci function 0000:00:04.0 Oct 2 19:14:38.094935 kernel: nvme nvme0: 2/0/0 default/read/poll queues Oct 2 19:14:38.101801 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 2 19:14:38.101872 kernel: GPT:9289727 != 16777215 Oct 2 19:14:38.104171 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 2 19:14:38.105584 kernel: GPT:9289727 != 16777215 Oct 2 19:14:38.107610 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 2 19:14:38.109291 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:14:38.197002 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (569) Oct 2 19:14:38.292657 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 19:14:38.410741 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:14:38.425821 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. 
Oct 2 19:14:38.431963 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 19:14:38.438741 systemd[1]: Starting disk-uuid.service... Oct 2 19:14:38.461991 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 19:14:38.478967 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:14:38.485981 disk-uuid[669]: Primary Header is updated. Oct 2 19:14:38.485981 disk-uuid[669]: Secondary Entries is updated. Oct 2 19:14:38.485981 disk-uuid[669]: Secondary Header is updated. Oct 2 19:14:39.507980 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:14:39.509094 disk-uuid[670]: The operation has completed successfully. Oct 2 19:14:39.815419 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 19:14:39.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:39.815658 systemd[1]: Finished disk-uuid.service. Oct 2 19:14:39.837338 kernel: kauditd_printk_skb: 5 callbacks suppressed Oct 2 19:14:39.837385 kernel: audit: type=1130 audit(1696274079.817:16): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:39.837416 kernel: audit: type=1131 audit(1696274079.818:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:39.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:39.828602 systemd[1]: Starting verity-setup.service... 
Oct 2 19:14:39.887956 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 2 19:14:39.991392 systemd[1]: Found device dev-mapper-usr.device. Oct 2 19:14:39.998373 systemd[1]: Mounting sysusr-usr.mount... Oct 2 19:14:40.013461 systemd[1]: Finished verity-setup.service. Oct 2 19:14:40.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:40.024993 kernel: audit: type=1130 audit(1696274080.015:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:40.103933 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 19:14:40.106276 systemd[1]: Mounted sysusr-usr.mount. Oct 2 19:14:40.109745 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 2 19:14:40.114307 systemd[1]: Starting ignition-setup.service... Oct 2 19:14:40.122012 systemd[1]: Starting parse-ip-for-networkd.service... Oct 2 19:14:40.161390 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 2 19:14:40.161472 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 2 19:14:40.161497 kernel: BTRFS info (device nvme0n1p6): has skinny extents Oct 2 19:14:40.184962 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 2 19:14:40.218041 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 19:14:40.266290 systemd[1]: Finished ignition-setup.service. Oct 2 19:14:40.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:40.272178 systemd[1]: Starting ignition-fetch-offline.service... 
Oct 2 19:14:40.280182 kernel: audit: type=1130 audit(1696274080.266:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:40.499985 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 19:14:40.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:40.509000 audit: BPF prog-id=9 op=LOAD Oct 2 19:14:40.514141 kernel: audit: type=1130 audit(1696274080.502:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:40.514202 kernel: audit: type=1334 audit(1696274080.509:21): prog-id=9 op=LOAD Oct 2 19:14:40.512324 systemd[1]: Starting systemd-networkd.service... Oct 2 19:14:40.571547 systemd-networkd[1016]: lo: Link UP Oct 2 19:14:40.571570 systemd-networkd[1016]: lo: Gained carrier Oct 2 19:14:40.575958 systemd-networkd[1016]: Enumeration completed Oct 2 19:14:40.591759 kernel: audit: type=1130 audit(1696274080.576:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:40.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:40.576147 systemd[1]: Started systemd-networkd.service. Oct 2 19:14:40.578102 systemd[1]: Reached target network.target. Oct 2 19:14:40.587673 systemd-networkd[1016]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:14:40.588983 systemd[1]: Starting iscsiuio.service... 
Oct 2 19:14:40.604358 systemd-networkd[1016]: eth0: Link UP Oct 2 19:14:40.604381 systemd-networkd[1016]: eth0: Gained carrier Oct 2 19:14:40.615648 systemd[1]: Started iscsiuio.service. Oct 2 19:14:40.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:40.619724 systemd[1]: Starting iscsid.service... Oct 2 19:14:40.630721 kernel: audit: type=1130 audit(1696274080.616:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:40.633137 systemd-networkd[1016]: eth0: DHCPv4 address 172.31.21.101/20, gateway 172.31.16.1 acquired from 172.31.16.1 Oct 2 19:14:40.640402 iscsid[1021]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:14:40.646966 iscsid[1021]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Oct 2 19:14:40.646966 iscsid[1021]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 19:14:40.646966 iscsid[1021]: If using hardware iscsi like qla4xxx this message can be ignored. 
Oct 2 19:14:40.646966 iscsid[1021]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:14:40.646966 iscsid[1021]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 19:14:40.685076 kernel: audit: type=1130 audit(1696274080.667:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:40.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:40.664677 systemd[1]: Started iscsid.service. Oct 2 19:14:40.684835 systemd[1]: Starting dracut-initqueue.service... Oct 2 19:14:40.733220 systemd[1]: Finished dracut-initqueue.service. Oct 2 19:14:40.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:40.736699 systemd[1]: Reached target remote-fs-pre.target. Oct 2 19:14:40.755290 kernel: audit: type=1130 audit(1696274080.734:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:40.745892 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:14:40.747949 systemd[1]: Reached target remote-fs.target. Oct 2 19:14:40.751325 systemd[1]: Starting dracut-pre-mount.service... Oct 2 19:14:40.788730 systemd[1]: Finished dracut-pre-mount.service. Oct 2 19:14:40.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:14:40.977430 ignition[938]: Ignition 2.14.0 Oct 2 19:14:40.977465 ignition[938]: Stage: fetch-offline Oct 2 19:14:40.977939 ignition[938]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:14:40.980656 ignition[938]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:14:41.001013 ignition[938]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:14:41.004089 ignition[938]: Ignition finished successfully Oct 2 19:14:41.007447 systemd[1]: Finished ignition-fetch-offline.service. Oct 2 19:14:41.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:41.012312 systemd[1]: Starting ignition-fetch.service... Oct 2 19:14:41.043385 ignition[1040]: Ignition 2.14.0 Oct 2 19:14:41.043414 ignition[1040]: Stage: fetch Oct 2 19:14:41.043771 ignition[1040]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:14:41.043831 ignition[1040]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:14:41.059601 ignition[1040]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:14:41.062017 ignition[1040]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:14:41.070504 ignition[1040]: INFO : PUT result: OK Oct 2 19:14:41.074184 ignition[1040]: DEBUG : parsed url from cmdline: "" Oct 2 19:14:41.074184 ignition[1040]: INFO : no config URL provided Oct 2 19:14:41.074184 ignition[1040]: INFO : reading system config file "/usr/lib/ignition/user.ign" Oct 2 19:14:41.080571 ignition[1040]: INFO : no config at "/usr/lib/ignition/user.ign" Oct 2 19:14:41.080571 ignition[1040]: INFO : PUT 
http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:14:41.080571 ignition[1040]: INFO : PUT result: OK Oct 2 19:14:41.080571 ignition[1040]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Oct 2 19:14:41.094294 ignition[1040]: INFO : GET result: OK Oct 2 19:14:41.094294 ignition[1040]: DEBUG : parsing config with SHA512: 80c7ec0933ef15db9d8cac284848953f56c2f6c103b3d0e43a46a9745c6d4ad587ace63d0c9c9640054dede2120f38e55ae31e750b12a781ef77fdb6e8b4bb41 Oct 2 19:14:41.119509 unknown[1040]: fetched base config from "system" Oct 2 19:14:41.119542 unknown[1040]: fetched base config from "system" Oct 2 19:14:41.119561 unknown[1040]: fetched user config from "aws" Oct 2 19:14:41.125337 ignition[1040]: fetch: fetch complete Oct 2 19:14:41.125362 ignition[1040]: fetch: fetch passed Oct 2 19:14:41.128520 ignition[1040]: Ignition finished successfully Oct 2 19:14:41.132869 systemd[1]: Finished ignition-fetch.service. Oct 2 19:14:41.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:41.136852 systemd[1]: Starting ignition-kargs.service... 
Oct 2 19:14:41.170978 ignition[1046]: Ignition 2.14.0 Oct 2 19:14:41.171008 ignition[1046]: Stage: kargs Oct 2 19:14:41.171368 ignition[1046]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:14:41.171428 ignition[1046]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:14:41.187338 ignition[1046]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:14:41.189817 ignition[1046]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:14:41.192734 ignition[1046]: INFO : PUT result: OK Oct 2 19:14:41.197821 ignition[1046]: kargs: kargs passed Oct 2 19:14:41.197963 ignition[1046]: Ignition finished successfully Oct 2 19:14:41.202382 systemd[1]: Finished ignition-kargs.service. Oct 2 19:14:41.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:41.208121 systemd[1]: Starting ignition-disks.service... Oct 2 19:14:41.238306 ignition[1052]: Ignition 2.14.0 Oct 2 19:14:41.238336 ignition[1052]: Stage: disks Oct 2 19:14:41.238723 ignition[1052]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:14:41.238782 ignition[1052]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:14:41.254102 ignition[1052]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:14:41.256849 ignition[1052]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:14:41.260694 ignition[1052]: INFO : PUT result: OK Oct 2 19:14:41.265973 ignition[1052]: disks: disks passed Oct 2 19:14:41.266317 ignition[1052]: Ignition finished successfully Oct 2 19:14:41.270804 systemd[1]: Finished ignition-disks.service. 
Oct 2 19:14:41.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:41.274294 systemd[1]: Reached target initrd-root-device.target. Oct 2 19:14:41.278005 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:14:41.281275 systemd[1]: Reached target local-fs.target. Oct 2 19:14:41.284389 systemd[1]: Reached target sysinit.target. Oct 2 19:14:41.287405 systemd[1]: Reached target basic.target. Oct 2 19:14:41.292022 systemd[1]: Starting systemd-fsck-root.service... Oct 2 19:14:41.341480 systemd-fsck[1060]: ROOT: clean, 603/553520 files, 56011/553472 blocks Oct 2 19:14:41.351826 systemd[1]: Finished systemd-fsck-root.service. Oct 2 19:14:41.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:41.356613 systemd[1]: Mounting sysroot.mount... Oct 2 19:14:41.386956 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 19:14:41.389511 systemd[1]: Mounted sysroot.mount. Oct 2 19:14:41.390069 systemd[1]: Reached target initrd-root-fs.target. Oct 2 19:14:41.402483 systemd[1]: Mounting sysroot-usr.mount... Oct 2 19:14:41.405825 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 2 19:14:41.405959 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 19:14:41.406033 systemd[1]: Reached target ignition-diskful.target. Oct 2 19:14:41.437082 systemd[1]: Mounted sysroot-usr.mount. Oct 2 19:14:41.442178 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:14:41.447163 systemd[1]: Starting initrd-setup-root.service... 
Oct 2 19:14:41.470942 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1077) Oct 2 19:14:41.480231 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 2 19:14:41.480300 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 2 19:14:41.480325 initrd-setup-root[1082]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 19:14:41.485955 kernel: BTRFS info (device nvme0n1p6): has skinny extents Oct 2 19:14:41.498947 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 2 19:14:41.502961 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:14:41.513004 initrd-setup-root[1108]: cut: /sysroot/etc/group: No such file or directory Oct 2 19:14:41.530990 initrd-setup-root[1116]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 19:14:41.551051 initrd-setup-root[1124]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 19:14:41.786299 systemd[1]: Finished initrd-setup-root.service. Oct 2 19:14:41.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:41.791454 systemd[1]: Starting ignition-mount.service... Oct 2 19:14:41.799483 systemd[1]: Starting sysroot-boot.service... Oct 2 19:14:41.830833 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Oct 2 19:14:41.831067 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Oct 2 19:14:41.864057 systemd[1]: Finished sysroot-boot.service. Oct 2 19:14:41.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:14:41.882207 ignition[1144]: INFO : Ignition 2.14.0 Oct 2 19:14:41.882207 ignition[1144]: INFO : Stage: mount Oct 2 19:14:41.886002 ignition[1144]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:14:41.886002 ignition[1144]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:14:41.901547 ignition[1144]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:14:41.904147 ignition[1144]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:14:41.907513 ignition[1144]: INFO : PUT result: OK Oct 2 19:14:41.912739 ignition[1144]: INFO : mount: mount passed Oct 2 19:14:41.914474 ignition[1144]: INFO : Ignition finished successfully Oct 2 19:14:41.917419 systemd[1]: Finished ignition-mount.service. Oct 2 19:14:41.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:41.921955 systemd[1]: Starting ignition-files.service... Oct 2 19:14:41.945689 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:14:41.969963 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1152) Oct 2 19:14:41.975725 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 2 19:14:41.975782 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 2 19:14:41.978026 kernel: BTRFS info (device nvme0n1p6): has skinny extents Oct 2 19:14:41.984933 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 2 19:14:41.990167 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Oct 2 19:14:42.024726 ignition[1171]: INFO : Ignition 2.14.0 Oct 2 19:14:42.024726 ignition[1171]: INFO : Stage: files Oct 2 19:14:42.028291 ignition[1171]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:14:42.028291 ignition[1171]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:14:42.042975 ignition[1171]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:14:42.045798 ignition[1171]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:14:42.049588 ignition[1171]: INFO : PUT result: OK Oct 2 19:14:42.054706 ignition[1171]: DEBUG : files: compiled without relabeling support, skipping Oct 2 19:14:42.058756 ignition[1171]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 19:14:42.061684 ignition[1171]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 19:14:42.135424 ignition[1171]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 19:14:42.138534 ignition[1171]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 19:14:42.142459 unknown[1171]: wrote ssh authorized keys file for user: core Oct 2 19:14:42.144785 ignition[1171]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 19:14:42.148715 ignition[1171]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Oct 2 19:14:42.152742 ignition[1171]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1 Oct 2 19:14:42.310735 ignition[1171]: INFO : GET result: OK Oct 2 19:14:42.496313 systemd-networkd[1016]: eth0: Gained IPv6LL Oct 2 19:14:42.777027 ignition[1171]: DEBUG : file matches expected sum of: 
6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742 Oct 2 19:14:42.782113 ignition[1171]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Oct 2 19:14:42.782113 ignition[1171]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.24.2-linux-arm64.tar.gz" Oct 2 19:14:42.782113 ignition[1171]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-arm64.tar.gz: attempt #1 Oct 2 19:14:42.871612 ignition[1171]: INFO : GET result: OK Oct 2 19:14:43.028448 ignition[1171]: DEBUG : file matches expected sum of: ebd055e9b2888624d006decd582db742131ed815d059d529ba21eaf864becca98a84b20a10eec91051b9d837c6855d28d5042bf5e9a454f4540aec6b82d37e96 Oct 2 19:14:43.035099 ignition[1171]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.24.2-linux-arm64.tar.gz" Oct 2 19:14:43.035099 ignition[1171]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Oct 2 19:14:43.035099 ignition[1171]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:14:43.051577 ignition[1171]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem368761989" Oct 2 19:14:43.058620 ignition[1171]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem368761989": device or resource busy Oct 2 19:14:43.058620 ignition[1171]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem368761989", trying btrfs: device or resource busy Oct 2 19:14:43.058620 ignition[1171]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem368761989" Oct 2 19:14:43.068589 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by 
ignition (1173) Oct 2 19:14:43.068640 ignition[1171]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem368761989" Oct 2 19:14:43.073013 ignition[1171]: INFO : op(3): [started] unmounting "/mnt/oem368761989" Oct 2 19:14:43.073013 ignition[1171]: INFO : op(3): [finished] unmounting "/mnt/oem368761989" Oct 2 19:14:43.073013 ignition[1171]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Oct 2 19:14:43.082469 ignition[1171]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:14:43.082469 ignition[1171]: INFO : GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/arm64/kubeadm: attempt #1 Oct 2 19:14:43.097895 systemd[1]: mnt-oem368761989.mount: Deactivated successfully. Oct 2 19:14:43.177510 ignition[1171]: INFO : GET result: OK Oct 2 19:14:43.785711 ignition[1171]: DEBUG : file matches expected sum of: daab8965a4f617d1570d04c031ab4d55fff6aa13a61f0e4045f2338947f9fb0ee3a80fdee57cfe86db885390595460342181e1ec52b89f127ef09c393ae3db7f Oct 2 19:14:43.790873 ignition[1171]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:14:43.790873 ignition[1171]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:14:43.790873 ignition[1171]: INFO : GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/arm64/kubelet: attempt #1 Oct 2 19:14:43.839126 ignition[1171]: INFO : GET result: OK Oct 2 19:14:45.665353 ignition[1171]: DEBUG : file matches expected sum of: 7b872a34d86e8aa75455a62a20f5cf16426de2ae54ffb8e0250fead920838df818201b8512c2f8bf4c939e5b21babab371f3a48803e2e861da9e6f8cdd022324 Oct 2 19:14:45.670517 ignition[1171]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 
19:14:45.674106 ignition[1171]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh" Oct 2 19:14:45.677986 ignition[1171]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 19:14:45.681624 ignition[1171]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:14:45.685284 ignition[1171]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:14:45.685284 ignition[1171]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Oct 2 19:14:45.692987 ignition[1171]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:14:45.707255 ignition[1171]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem956836945" Oct 2 19:14:45.710270 ignition[1171]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem956836945": device or resource busy Oct 2 19:14:45.710270 ignition[1171]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem956836945", trying btrfs: device or resource busy Oct 2 19:14:45.710270 ignition[1171]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem956836945" Oct 2 19:14:45.727806 ignition[1171]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem956836945" Oct 2 19:14:45.731461 ignition[1171]: INFO : op(6): [started] unmounting "/mnt/oem956836945" Oct 2 19:14:45.737498 systemd[1]: mnt-oem956836945.mount: Deactivated successfully. 
Oct 2 19:14:45.740342 ignition[1171]: INFO : op(6): [finished] unmounting "/mnt/oem956836945" Oct 2 19:14:45.742706 ignition[1171]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Oct 2 19:14:45.742706 ignition[1171]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Oct 2 19:14:45.750554 ignition[1171]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:14:45.764477 ignition[1171]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4249415795" Oct 2 19:14:45.767476 ignition[1171]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4249415795": device or resource busy Oct 2 19:14:45.767476 ignition[1171]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4249415795", trying btrfs: device or resource busy Oct 2 19:14:45.767476 ignition[1171]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4249415795" Oct 2 19:14:45.782027 ignition[1171]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4249415795" Oct 2 19:14:45.790729 ignition[1171]: INFO : op(9): [started] unmounting "/mnt/oem4249415795" Oct 2 19:14:45.793229 ignition[1171]: INFO : op(9): [finished] unmounting "/mnt/oem4249415795" Oct 2 19:14:45.796720 systemd[1]: mnt-oem4249415795.mount: Deactivated successfully. 
Oct 2 19:14:45.802381 ignition[1171]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Oct 2 19:14:45.806540 ignition[1171]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Oct 2 19:14:45.810681 ignition[1171]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:14:45.824535 ignition[1171]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1113270812" Oct 2 19:14:45.829726 ignition[1171]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1113270812": device or resource busy Oct 2 19:14:45.829726 ignition[1171]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1113270812", trying btrfs: device or resource busy Oct 2 19:14:45.829726 ignition[1171]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1113270812" Oct 2 19:14:45.829726 ignition[1171]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1113270812" Oct 2 19:14:45.829726 ignition[1171]: INFO : op(c): [started] unmounting "/mnt/oem1113270812" Oct 2 19:14:45.829726 ignition[1171]: INFO : op(c): [finished] unmounting "/mnt/oem1113270812" Oct 2 19:14:45.829726 ignition[1171]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Oct 2 19:14:45.829726 ignition[1171]: INFO : files: op(d): [started] processing unit "coreos-metadata-sshkeys@.service" Oct 2 19:14:45.829726 ignition[1171]: INFO : files: op(d): [finished] processing unit "coreos-metadata-sshkeys@.service" Oct 2 19:14:45.829726 ignition[1171]: INFO : files: op(e): [started] processing unit "amazon-ssm-agent.service" Oct 2 19:14:45.829726 ignition[1171]: INFO : files: op(e): op(f): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Oct 2 
19:14:45.829726 ignition[1171]: INFO : files: op(e): op(f): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Oct 2 19:14:45.829726 ignition[1171]: INFO : files: op(e): [finished] processing unit "amazon-ssm-agent.service" Oct 2 19:14:45.829726 ignition[1171]: INFO : files: op(10): [started] processing unit "nvidia.service" Oct 2 19:14:45.829726 ignition[1171]: INFO : files: op(10): [finished] processing unit "nvidia.service" Oct 2 19:14:45.829726 ignition[1171]: INFO : files: op(11): [started] processing unit "prepare-cni-plugins.service" Oct 2 19:14:45.829726 ignition[1171]: INFO : files: op(11): op(12): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:14:45.829726 ignition[1171]: INFO : files: op(11): op(12): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:14:45.829726 ignition[1171]: INFO : files: op(11): [finished] processing unit "prepare-cni-plugins.service" Oct 2 19:14:45.829726 ignition[1171]: INFO : files: op(13): [started] processing unit "prepare-critools.service" Oct 2 19:14:45.933423 kernel: kauditd_printk_skb: 9 callbacks suppressed Oct 2 19:14:45.933468 kernel: audit: type=1130 audit(1696274085.888:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:45.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:14:45.933573 ignition[1171]: INFO : files: op(13): op(14): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:14:45.933573 ignition[1171]: INFO : files: op(13): op(14): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:14:45.933573 ignition[1171]: INFO : files: op(13): [finished] processing unit "prepare-critools.service" Oct 2 19:14:45.933573 ignition[1171]: INFO : files: op(15): [started] setting preset to enabled for "amazon-ssm-agent.service" Oct 2 19:14:45.933573 ignition[1171]: INFO : files: op(15): [finished] setting preset to enabled for "amazon-ssm-agent.service" Oct 2 19:14:45.933573 ignition[1171]: INFO : files: op(16): [started] setting preset to enabled for "nvidia.service" Oct 2 19:14:45.933573 ignition[1171]: INFO : files: op(16): [finished] setting preset to enabled for "nvidia.service" Oct 2 19:14:45.933573 ignition[1171]: INFO : files: op(17): [started] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:14:45.933573 ignition[1171]: INFO : files: op(17): [finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:14:45.933573 ignition[1171]: INFO : files: op(18): [started] setting preset to enabled for "prepare-critools.service" Oct 2 19:14:45.933573 ignition[1171]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 19:14:45.933573 ignition[1171]: INFO : files: op(19): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 2 19:14:45.933573 ignition[1171]: INFO : files: op(19): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 2 19:14:45.933573 ignition[1171]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:14:45.933573 ignition[1171]: INFO : files: createResultFile: createFiles: 
op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:14:45.933573 ignition[1171]: INFO : files: files passed Oct 2 19:14:45.933573 ignition[1171]: INFO : Ignition finished successfully Oct 2 19:14:45.886738 systemd[1]: Finished ignition-files.service. Oct 2 19:14:46.004615 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 19:14:46.008525 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 2 19:14:46.018632 systemd[1]: Starting ignition-quench.service... Oct 2 19:14:46.035688 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 2 19:14:46.037747 systemd[1]: Finished ignition-quench.service. Oct 2 19:14:46.056795 kernel: audit: type=1130 audit(1696274086.038:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.056836 kernel: audit: type=1131 audit(1696274086.038:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.071466 initrd-setup-root-after-ignition[1196]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 19:14:46.076875 systemd[1]: Finished initrd-setup-root-after-ignition.service. 
Oct 2 19:14:46.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.081065 systemd[1]: Reached target ignition-complete.target. Oct 2 19:14:46.090649 kernel: audit: type=1130 audit(1696274086.079:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.094033 systemd[1]: Starting initrd-parse-etc.service... Oct 2 19:14:46.145410 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 19:14:46.147569 systemd[1]: Finished initrd-parse-etc.service. Oct 2 19:14:46.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.151339 systemd[1]: Reached target initrd-fs.target. Oct 2 19:14:46.167412 kernel: audit: type=1130 audit(1696274086.150:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.167461 kernel: audit: type=1131 audit(1696274086.150:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.169331 systemd[1]: Reached target initrd.target. Oct 2 19:14:46.176743 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. 
Oct 2 19:14:46.180881 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 19:14:46.223546 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 19:14:46.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.228525 systemd[1]: Starting initrd-cleanup.service... Oct 2 19:14:46.237413 kernel: audit: type=1130 audit(1696274086.221:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.258994 systemd[1]: Stopped target nss-lookup.target. Oct 2 19:14:46.316640 kernel: audit: type=1131 audit(1696274086.261:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.316682 kernel: audit: type=1131 audit(1696274086.276:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.316707 kernel: audit: type=1131 audit(1696274086.286:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:14:46.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.259779 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 19:14:46.261126 systemd[1]: Stopped target timers.target. Oct 2 19:14:46.261399 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 19:14:46.261614 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 19:14:46.270888 systemd[1]: Stopped target initrd.target. Oct 2 19:14:46.271448 systemd[1]: Stopped target basic.target. Oct 2 19:14:46.271774 systemd[1]: Stopped target ignition-complete.target. Oct 2 19:14:46.272454 systemd[1]: Stopped target ignition-diskful.target. Oct 2 19:14:46.272784 systemd[1]: Stopped target initrd-root-device.target. Oct 2 19:14:46.273471 systemd[1]: Stopped target remote-fs.target. Oct 2 19:14:46.273803 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 19:14:46.276319 systemd[1]: Stopped target sysinit.target. Oct 2 19:14:46.276597 systemd[1]: Stopped target local-fs.target. Oct 2 19:14:46.277096 systemd[1]: Stopped target local-fs-pre.target. Oct 2 19:14:46.277598 systemd[1]: Stopped target swap.target. Oct 2 19:14:46.277870 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 19:14:46.278172 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 19:14:46.286402 systemd[1]: Stopped target cryptsetup.target. 
Oct 2 19:14:46.286614 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 19:14:46.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.410895 ignition[1209]: INFO : Ignition 2.14.0 Oct 2 19:14:46.410895 ignition[1209]: INFO : Stage: umount Oct 2 19:14:46.410895 ignition[1209]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:14:46.410895 ignition[1209]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:14:46.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.286819 systemd[1]: Stopped dracut-initqueue.service. Oct 2 19:14:46.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:14:46.459262 ignition[1209]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:14:46.459262 ignition[1209]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:14:46.459262 ignition[1209]: INFO : PUT result: OK Oct 2 19:14:46.459262 ignition[1209]: INFO : umount: umount passed Oct 2 19:14:46.459262 ignition[1209]: INFO : Ignition finished successfully Oct 2 19:14:46.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.287805 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 19:14:46.288042 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 19:14:46.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.288414 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 19:14:46.288601 systemd[1]: Stopped ignition-files.service. 
Oct 2 19:14:46.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.515000 audit: BPF prog-id=6 op=UNLOAD Oct 2 19:14:46.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.323781 systemd[1]: Stopping ignition-mount.service... Oct 2 19:14:46.337939 systemd[1]: Stopping iscsiuio.service... Oct 2 19:14:46.366687 systemd[1]: Stopping sysroot-boot.service... Oct 2 19:14:46.392961 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 19:14:46.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.393289 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 19:14:46.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.410813 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 19:14:46.411140 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 19:14:46.423899 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 19:14:46.426267 systemd[1]: Stopped iscsiuio.service. Oct 2 19:14:46.450457 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 19:14:46.450659 systemd[1]: Finished initrd-cleanup.service. 
Oct 2 19:14:46.457030 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 19:14:46.457231 systemd[1]: Stopped ignition-mount.service. Oct 2 19:14:46.459345 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 19:14:46.460426 systemd[1]: Stopped ignition-disks.service. Oct 2 19:14:46.466371 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 19:14:46.467445 systemd[1]: Stopped ignition-kargs.service. Oct 2 19:14:46.470693 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 2 19:14:46.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.470789 systemd[1]: Stopped ignition-fetch.service. Oct 2 19:14:46.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.473106 systemd[1]: Stopped target network.target. Oct 2 19:14:46.474807 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 19:14:46.474949 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 19:14:46.480109 systemd[1]: Stopped target paths.target. Oct 2 19:14:46.481656 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 19:14:46.485546 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 19:14:46.487523 systemd[1]: Stopped target slices.target. Oct 2 19:14:46.489094 systemd[1]: Stopped target sockets.target. Oct 2 19:14:46.490837 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 19:14:46.490898 systemd[1]: Closed iscsid.socket. Oct 2 19:14:46.492425 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 19:14:46.492500 systemd[1]: Closed iscsiuio.socket. Oct 2 19:14:46.494031 systemd[1]: ignition-setup.service: Deactivated successfully. 
Oct 2 19:14:46.494129 systemd[1]: Stopped ignition-setup.service. Oct 2 19:14:46.496197 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:14:46.497978 systemd[1]: Stopping systemd-resolved.service... Oct 2 19:14:46.507266 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 19:14:46.507476 systemd[1]: Stopped systemd-resolved.service. Oct 2 19:14:46.508989 systemd-networkd[1016]: eth0: DHCPv6 lease lost Oct 2 19:14:46.605000 audit: BPF prog-id=9 op=UNLOAD Oct 2 19:14:46.514205 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:14:46.514439 systemd[1]: Stopped systemd-networkd.service. Oct 2 19:14:46.517267 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 19:14:46.517360 systemd[1]: Closed systemd-networkd.socket. Oct 2 19:14:46.521705 systemd[1]: Stopping network-cleanup.service... Oct 2 19:14:46.532494 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 19:14:46.533765 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 19:14:46.537170 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 19:14:46.537282 systemd[1]: Stopped systemd-sysctl.service. Oct 2 19:14:46.540006 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 19:14:46.540100 systemd[1]: Stopped systemd-modules-load.service. Oct 2 19:14:46.543740 systemd[1]: Stopping systemd-udevd.service... Oct 2 19:14:46.565270 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 19:14:46.565613 systemd[1]: Stopped systemd-udevd.service. Oct 2 19:14:46.584152 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 19:14:46.584883 systemd[1]: Stopped network-cleanup.service. Oct 2 19:14:46.590372 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 19:14:46.595778 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 19:14:46.600840 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Oct 2 19:14:46.600984 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 19:14:46.654232 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 19:14:46.654357 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 19:14:46.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.659474 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 19:14:46.659594 systemd[1]: Stopped dracut-cmdline.service. Oct 2 19:14:46.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.664778 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 19:14:46.664957 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 19:14:46.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.671642 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 19:14:46.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:14:46.678107 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 2 19:14:46.678253 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Oct 2 19:14:46.680615 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 19:14:46.680721 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 19:14:46.682660 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 19:14:46.682760 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 19:14:46.712158 systemd[1]: mnt-oem1113270812.mount: Deactivated successfully. Oct 2 19:14:46.712322 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 19:14:46.712444 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 19:14:46.712551 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 2 19:14:46.714124 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 19:14:46.738264 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 19:14:46.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.743062 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 19:14:46.745256 systemd[1]: Stopped sysroot-boot.service. Oct 2 19:14:46.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.749077 systemd[1]: Reached target initrd-switch-root.target. 
Oct 2 19:14:46.752795 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 19:14:46.755001 systemd[1]: Stopped initrd-setup-root.service. Oct 2 19:14:46.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:14:46.759991 systemd[1]: Starting initrd-switch-root.service... Oct 2 19:14:46.791064 systemd[1]: Switching root. Oct 2 19:14:46.818629 iscsid[1021]: iscsid shutting down. Oct 2 19:14:46.820376 systemd-journald[309]: Received SIGTERM from PID 1 (systemd). Oct 2 19:14:46.820455 systemd-journald[309]: Journal stopped Oct 2 19:14:52.804567 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 19:14:52.810335 kernel: SELinux: Class anon_inode not defined in policy. Oct 2 19:14:52.812300 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 19:14:52.812340 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 19:14:52.812379 kernel: SELinux: policy capability open_perms=1 Oct 2 19:14:52.812477 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 19:14:52.812513 kernel: SELinux: policy capability always_check_network=0 Oct 2 19:14:52.812546 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 19:14:52.812577 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 19:14:52.812606 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 19:14:52.812638 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 19:14:52.812672 systemd[1]: Successfully loaded SELinux policy in 114.289ms. Oct 2 19:14:52.812842 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.670ms. 
Oct 2 19:14:52.812888 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:14:52.815979 systemd[1]: Detected virtualization amazon. Oct 2 19:14:52.816031 systemd[1]: Detected architecture arm64. Oct 2 19:14:52.816069 systemd[1]: Detected first boot. Oct 2 19:14:52.816105 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:14:52.816140 systemd[1]: Populated /etc with preset unit settings. Oct 2 19:14:52.816177 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:14:52.816222 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:14:52.816256 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:14:52.816354 kernel: kauditd_printk_skb: 39 callbacks suppressed Oct 2 19:14:52.816390 kernel: audit: type=1334 audit(1696274092.276:84): prog-id=12 op=LOAD Oct 2 19:14:52.816420 kernel: audit: type=1334 audit(1696274092.279:85): prog-id=3 op=UNLOAD Oct 2 19:14:52.816449 kernel: audit: type=1334 audit(1696274092.281:86): prog-id=13 op=LOAD Oct 2 19:14:52.816477 kernel: audit: type=1334 audit(1696274092.284:87): prog-id=14 op=LOAD Oct 2 19:14:52.816508 kernel: audit: type=1334 audit(1696274092.284:88): prog-id=4 op=UNLOAD Oct 2 19:14:52.816544 kernel: audit: type=1334 audit(1696274092.284:89): prog-id=5 op=UNLOAD Oct 2 19:14:52.816576 systemd[1]: iscsid.service: Deactivated successfully. 
Oct 2 19:14:52.816609 kernel: audit: type=1334 audit(1696274092.286:90): prog-id=15 op=LOAD
Oct 2 19:14:52.818988 systemd[1]: Stopped iscsid.service.
Oct 2 19:14:52.819050 kernel: audit: type=1334 audit(1696274092.286:91): prog-id=12 op=UNLOAD
Oct 2 19:14:52.819086 kernel: audit: type=1334 audit(1696274092.289:92): prog-id=16 op=LOAD
Oct 2 19:14:52.819120 kernel: audit: type=1334 audit(1696274092.292:93): prog-id=17 op=LOAD
Oct 2 19:14:52.819156 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 2 19:14:52.819192 systemd[1]: Stopped initrd-switch-root.service.
Oct 2 19:14:52.819234 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 2 19:14:52.819269 systemd[1]: Created slice system-addon\x2dconfig.slice.
Oct 2 19:14:52.819301 systemd[1]: Created slice system-addon\x2drun.slice.
Oct 2 19:14:52.819337 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Oct 2 19:14:52.819371 systemd[1]: Created slice system-getty.slice.
Oct 2 19:14:52.819472 systemd[1]: Created slice system-modprobe.slice.
Oct 2 19:14:52.819508 systemd[1]: Created slice system-serial\x2dgetty.slice.
Oct 2 19:14:52.819549 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Oct 2 19:14:52.819582 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Oct 2 19:14:52.819617 systemd[1]: Created slice user.slice.
Oct 2 19:14:52.819648 systemd[1]: Started systemd-ask-password-console.path.
Oct 2 19:14:52.819678 systemd[1]: Started systemd-ask-password-wall.path.
Oct 2 19:14:52.819710 systemd[1]: Set up automount boot.automount.
Oct 2 19:14:52.819740 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Oct 2 19:14:52.819770 systemd[1]: Stopped target initrd-switch-root.target.
Oct 2 19:14:52.819803 systemd[1]: Stopped target initrd-fs.target.
Oct 2 19:14:52.819838 systemd[1]: Stopped target initrd-root-fs.target.
Oct 2 19:14:52.819868 systemd[1]: Reached target integritysetup.target.
Oct 2 19:14:52.819898 systemd[1]: Reached target remote-cryptsetup.target.
Oct 2 19:14:52.819957 systemd[1]: Reached target remote-fs.target.
Oct 2 19:14:52.819994 systemd[1]: Reached target slices.target.
Oct 2 19:14:52.820024 systemd[1]: Reached target swap.target.
Oct 2 19:14:52.820054 systemd[1]: Reached target torcx.target.
Oct 2 19:14:52.820086 systemd[1]: Reached target veritysetup.target.
Oct 2 19:14:52.820116 systemd[1]: Listening on systemd-coredump.socket.
Oct 2 19:14:52.820148 systemd[1]: Listening on systemd-initctl.socket.
Oct 2 19:14:52.820186 systemd[1]: Listening on systemd-networkd.socket.
Oct 2 19:14:52.820219 systemd[1]: Listening on systemd-udevd-control.socket.
Oct 2 19:14:52.820250 systemd[1]: Listening on systemd-udevd-kernel.socket.
Oct 2 19:14:52.820283 systemd[1]: Listening on systemd-userdbd.socket.
Oct 2 19:14:52.820313 systemd[1]: Mounting dev-hugepages.mount...
Oct 2 19:14:52.820345 systemd[1]: Mounting dev-mqueue.mount...
Oct 2 19:14:52.820375 systemd[1]: Mounting media.mount...
Oct 2 19:14:52.820404 systemd[1]: Mounting sys-kernel-debug.mount...
Oct 2 19:14:52.820434 systemd[1]: Mounting sys-kernel-tracing.mount...
Oct 2 19:14:52.820467 systemd[1]: Mounting tmp.mount...
Oct 2 19:14:52.820497 systemd[1]: Starting flatcar-tmpfiles.service...
Oct 2 19:14:52.820528 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Oct 2 19:14:52.820558 systemd[1]: Starting kmod-static-nodes.service...
Oct 2 19:14:52.820588 systemd[1]: Starting modprobe@configfs.service...
Oct 2 19:14:52.820618 systemd[1]: Starting modprobe@dm_mod.service...
Oct 2 19:14:52.820719 systemd[1]: Starting modprobe@drm.service...
Oct 2 19:14:52.820751 systemd[1]: Starting modprobe@efi_pstore.service...
Oct 2 19:14:52.820782 systemd[1]: Starting modprobe@fuse.service...
Oct 2 19:14:52.820820 systemd[1]: Starting modprobe@loop.service...
Oct 2 19:14:52.820852 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 2 19:14:52.820885 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 2 19:14:52.820934 systemd[1]: Stopped systemd-fsck-root.service.
Oct 2 19:14:52.820970 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 2 19:14:52.821004 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 2 19:14:52.821035 systemd[1]: Stopped systemd-journald.service.
Oct 2 19:14:52.821066 systemd[1]: Starting systemd-journald.service...
Oct 2 19:14:52.821098 systemd[1]: Starting systemd-modules-load.service...
Oct 2 19:14:52.821136 systemd[1]: Starting systemd-network-generator.service...
Oct 2 19:14:52.821168 systemd[1]: Starting systemd-remount-fs.service...
Oct 2 19:14:52.821199 systemd[1]: Starting systemd-udev-trigger.service...
Oct 2 19:14:52.821232 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 2 19:14:52.821265 systemd[1]: Stopped verity-setup.service.
Oct 2 19:14:52.821299 systemd[1]: Mounted dev-hugepages.mount.
Oct 2 19:14:52.821332 systemd[1]: Mounted dev-mqueue.mount.
Oct 2 19:14:52.821362 systemd[1]: Mounted media.mount.
Oct 2 19:14:52.821392 systemd[1]: Mounted sys-kernel-debug.mount.
Oct 2 19:14:52.821862 systemd[1]: Mounted sys-kernel-tracing.mount.
Oct 2 19:14:52.827999 systemd[1]: Mounted tmp.mount.
Oct 2 19:14:52.828042 systemd[1]: Finished kmod-static-nodes.service.
Oct 2 19:14:52.828075 kernel: fuse: init (API version 7.34)
Oct 2 19:14:52.828106 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 2 19:14:52.828136 systemd[1]: Finished modprobe@configfs.service.
Oct 2 19:14:52.828169 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 2 19:14:52.828208 systemd[1]: Finished modprobe@dm_mod.service.
Oct 2 19:14:52.828241 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 2 19:14:52.828273 systemd[1]: Finished modprobe@drm.service.
Oct 2 19:14:52.828373 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 2 19:14:52.828408 systemd[1]: Finished modprobe@efi_pstore.service.
Oct 2 19:14:52.828438 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 2 19:14:52.828468 systemd[1]: Finished modprobe@fuse.service.
Oct 2 19:14:52.828503 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Oct 2 19:14:52.828532 kernel: loop: module loaded
Oct 2 19:14:52.828561 systemd[1]: Mounting sys-kernel-config.mount...
Oct 2 19:14:52.828591 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 2 19:14:52.828621 systemd[1]: Finished modprobe@loop.service.
Oct 2 19:14:52.828652 systemd[1]: Finished systemd-modules-load.service.
Oct 2 19:14:52.828687 systemd[1]: Finished systemd-network-generator.service.
Oct 2 19:14:52.828723 systemd[1]: Finished systemd-remount-fs.service.
Oct 2 19:14:52.828756 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Oct 2 19:14:52.828785 systemd[1]: Mounted sys-kernel-config.mount.
Oct 2 19:14:52.828817 systemd[1]: Reached target network-pre.target.
Oct 2 19:14:52.828860 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 2 19:14:52.828898 systemd-journald[1316]: Journal started
Oct 2 19:14:52.829040 systemd-journald[1316]: Runtime Journal (/run/log/journal/ec23feb65be95b9caff5e763dae38a54) is 8.0M, max 75.4M, 67.4M free.
Oct 2 19:14:47.607000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 2 19:14:47.771000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Oct 2 19:14:47.771000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Oct 2 19:14:47.771000 audit: BPF prog-id=10 op=LOAD
Oct 2 19:14:47.771000 audit: BPF prog-id=10 op=UNLOAD
Oct 2 19:14:47.771000 audit: BPF prog-id=11 op=LOAD
Oct 2 19:14:47.771000 audit: BPF prog-id=11 op=UNLOAD
Oct 2 19:14:52.276000 audit: BPF prog-id=12 op=LOAD
Oct 2 19:14:52.279000 audit: BPF prog-id=3 op=UNLOAD
Oct 2 19:14:52.281000 audit: BPF prog-id=13 op=LOAD
Oct 2 19:14:52.284000 audit: BPF prog-id=14 op=LOAD
Oct 2 19:14:52.284000 audit: BPF prog-id=4 op=UNLOAD
Oct 2 19:14:52.284000 audit: BPF prog-id=5 op=UNLOAD
Oct 2 19:14:52.286000 audit: BPF prog-id=15 op=LOAD
Oct 2 19:14:52.286000 audit: BPF prog-id=12 op=UNLOAD
Oct 2 19:14:52.289000 audit: BPF prog-id=16 op=LOAD
Oct 2 19:14:52.292000 audit: BPF prog-id=17 op=LOAD
Oct 2 19:14:52.292000 audit: BPF prog-id=13 op=UNLOAD
Oct 2 19:14:52.292000 audit: BPF prog-id=14 op=UNLOAD
Oct 2 19:14:52.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:52.306000 audit: BPF prog-id=15 op=UNLOAD
Oct 2 19:14:52.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:52.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:52.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:52.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:52.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:52.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:52.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:52.609000 audit: BPF prog-id=18 op=LOAD
Oct 2 19:14:52.609000 audit: BPF prog-id=19 op=LOAD
Oct 2 19:14:52.609000 audit: BPF prog-id=20 op=LOAD
Oct 2 19:14:52.609000 audit: BPF prog-id=16 op=UNLOAD
Oct 2 19:14:52.609000 audit: BPF prog-id=17 op=UNLOAD
Oct 2 19:14:52.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:52.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:52.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:52.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:52.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:52.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:52.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:52.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:52.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:52.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:52.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:52.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:52.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:52.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:52.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:52.795000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Oct 2 19:14:52.795000 audit[1316]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffc05e85b0 a2=4000 a3=1 items=0 ppid=1 pid=1316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 2 19:14:52.843100 systemd[1]: Starting systemd-hwdb-update.service...
Oct 2 19:14:52.843161 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 2 19:14:52.795000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Oct 2 19:14:52.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:52.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:52.273843 systemd[1]: Queued start job for default target multi-user.target.
Oct 2 19:14:47.986037 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2023-10-02T19:14:47Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]"
Oct 2 19:14:52.294237 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 2 19:14:47.996448 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2023-10-02T19:14:47Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Oct 2 19:14:47.996504 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2023-10-02T19:14:47Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Oct 2 19:14:47.996575 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2023-10-02T19:14:47Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Oct 2 19:14:52.853020 systemd[1]: Starting systemd-random-seed.service...
Oct 2 19:14:52.853111 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Oct 2 19:14:47.996605 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2023-10-02T19:14:47Z" level=debug msg="skipped missing lower profile" missing profile=oem
Oct 2 19:14:47.996690 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2023-10-02T19:14:47Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Oct 2 19:14:47.996723 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2023-10-02T19:14:47Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Oct 2 19:14:47.997216 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2023-10-02T19:14:47Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Oct 2 19:14:47.997308 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2023-10-02T19:14:47Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Oct 2 19:14:47.997345 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2023-10-02T19:14:47Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Oct 2 19:14:47.998430 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2023-10-02T19:14:47Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Oct 2 19:14:47.998534 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2023-10-02T19:14:47Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Oct 2 19:14:47.998588 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2023-10-02T19:14:47Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0
Oct 2 19:14:47.998632 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2023-10-02T19:14:47Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Oct 2 19:14:47.998684 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2023-10-02T19:14:47Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0
Oct 2 19:14:47.998726 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2023-10-02T19:14:47Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Oct 2 19:14:52.878983 systemd[1]: Starting systemd-sysctl.service...
Oct 2 19:14:52.879079 systemd[1]: Started systemd-journald.service.
Oct 2 19:14:52.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:52.872602 systemd[1]: Starting systemd-journal-flush.service...
Oct 2 19:14:51.402564 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2023-10-02T19:14:51Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Oct 2 19:14:51.403152 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2023-10-02T19:14:51Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Oct 2 19:14:51.403428 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2023-10-02T19:14:51Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Oct 2 19:14:51.403881 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2023-10-02T19:14:51Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Oct 2 19:14:51.404039 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2023-10-02T19:14:51Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Oct 2 19:14:51.404192 /usr/lib/systemd/system-generators/torcx-generator[1242]: time="2023-10-02T19:14:51Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Oct 2 19:14:52.913480 systemd-journald[1316]: Time spent on flushing to /var/log/journal/ec23feb65be95b9caff5e763dae38a54 is 90.889ms for 1142 entries.
Oct 2 19:14:52.913480 systemd-journald[1316]: System Journal (/var/log/journal/ec23feb65be95b9caff5e763dae38a54) is 8.0M, max 195.6M, 187.6M free.
Oct 2 19:14:53.035389 systemd-journald[1316]: Received client request to flush runtime journal.
Oct 2 19:14:52.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:52.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:52.940893 systemd[1]: Finished systemd-random-seed.service.
Oct 2 19:14:52.943200 systemd[1]: Reached target first-boot-complete.target.
Oct 2 19:14:52.986987 systemd[1]: Finished systemd-sysctl.service.
Oct 2 19:14:53.041319 systemd[1]: Finished systemd-journal-flush.service.
Oct 2 19:14:53.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:53.111754 systemd[1]: Finished systemd-udev-trigger.service.
Oct 2 19:14:53.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:53.118686 systemd[1]: Starting systemd-udev-settle.service...
Oct 2 19:14:53.146958 systemd[1]: Finished flatcar-tmpfiles.service.
Oct 2 19:14:53.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:53.152357 systemd[1]: Starting systemd-sysusers.service...
Oct 2 19:14:53.155132 udevadm[1358]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Oct 2 19:14:53.269039 systemd[1]: Finished systemd-sysusers.service.
Oct 2 19:14:53.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:53.273420 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Oct 2 19:14:53.385578 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Oct 2 19:14:53.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:53.996874 systemd[1]: Finished systemd-hwdb-update.service.
Oct 2 19:14:53.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:53.998000 audit: BPF prog-id=21 op=LOAD
Oct 2 19:14:53.998000 audit: BPF prog-id=22 op=LOAD
Oct 2 19:14:53.998000 audit: BPF prog-id=7 op=UNLOAD
Oct 2 19:14:53.998000 audit: BPF prog-id=8 op=UNLOAD
Oct 2 19:14:54.001835 systemd[1]: Starting systemd-udevd.service...
Oct 2 19:14:54.054559 systemd-udevd[1363]: Using default interface naming scheme 'v252'.
Oct 2 19:14:54.167670 systemd[1]: Started systemd-udevd.service.
Oct 2 19:14:54.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:54.171000 audit: BPF prog-id=23 op=LOAD
Oct 2 19:14:54.175143 systemd[1]: Starting systemd-networkd.service...
Oct 2 19:14:54.196000 audit: BPF prog-id=24 op=LOAD
Oct 2 19:14:54.196000 audit: BPF prog-id=25 op=LOAD
Oct 2 19:14:54.197000 audit: BPF prog-id=26 op=LOAD
Oct 2 19:14:54.202854 systemd[1]: Starting systemd-userdbd.service...
Oct 2 19:14:54.280151 (udev-worker)[1367]: Network interface NamePolicy= disabled on kernel command line.
Oct 2 19:14:54.298068 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Oct 2 19:14:54.355096 systemd[1]: Started systemd-userdbd.service.
Oct 2 19:14:54.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:54.542962 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1380)
Oct 2 19:14:54.552690 systemd-networkd[1368]: lo: Link UP
Oct 2 19:14:54.552714 systemd-networkd[1368]: lo: Gained carrier
Oct 2 19:14:54.553654 systemd-networkd[1368]: Enumeration completed
Oct 2 19:14:54.553833 systemd[1]: Started systemd-networkd.service.
Oct 2 19:14:54.553947 systemd-networkd[1368]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 2 19:14:54.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:54.559971 systemd[1]: Starting systemd-networkd-wait-online.service...
Oct 2 19:14:54.568940 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Oct 2 19:14:54.568989 systemd-networkd[1368]: eth0: Link UP
Oct 2 19:14:54.569273 systemd-networkd[1368]: eth0: Gained carrier
Oct 2 19:14:54.642441 systemd-networkd[1368]: eth0: DHCPv4 address 172.31.21.101/20, gateway 172.31.16.1 acquired from 172.31.16.1
Oct 2 19:14:54.847575 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Oct 2 19:14:54.850593 systemd[1]: Finished systemd-udev-settle.service.
Oct 2 19:14:54.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:54.855494 systemd[1]: Starting lvm2-activation-early.service...
Oct 2 19:14:54.908055 lvm[1477]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 2 19:14:54.944246 systemd[1]: Finished lvm2-activation-early.service.
Oct 2 19:14:54.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:54.946555 systemd[1]: Reached target cryptsetup.target.
Oct 2 19:14:54.951414 systemd[1]: Starting lvm2-activation.service...
Oct 2 19:14:54.966246 lvm[1478]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 2 19:14:55.000301 systemd[1]: Finished lvm2-activation.service.
Oct 2 19:14:55.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:55.002423 systemd[1]: Reached target local-fs-pre.target.
Oct 2 19:14:55.004341 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 2 19:14:55.004404 systemd[1]: Reached target local-fs.target.
Oct 2 19:14:55.006188 systemd[1]: Reached target machines.target.
Oct 2 19:14:55.010800 systemd[1]: Starting ldconfig.service...
Oct 2 19:14:55.014821 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Oct 2 19:14:55.015037 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 2 19:14:55.017667 systemd[1]: Starting systemd-boot-update.service...
Oct 2 19:14:55.021670 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Oct 2 19:14:55.027694 systemd[1]: Starting systemd-machine-id-commit.service...
Oct 2 19:14:55.029854 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Oct 2 19:14:55.030034 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Oct 2 19:14:55.032666 systemd[1]: Starting systemd-tmpfiles-setup.service...
Oct 2 19:14:55.078571 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1480 (bootctl)
Oct 2 19:14:55.081625 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Oct 2 19:14:55.105313 systemd-tmpfiles[1483]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Oct 2 19:14:55.111716 systemd-tmpfiles[1483]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 2 19:14:55.114988 systemd-tmpfiles[1483]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 2 19:14:55.118044 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Oct 2 19:14:55.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:55.265733 systemd-fsck[1489]: fsck.fat 4.2 (2021-01-31)
Oct 2 19:14:55.265733 systemd-fsck[1489]: /dev/nvme0n1p1: 236 files, 113463/258078 clusters
Oct 2 19:14:55.270892 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Oct 2 19:14:55.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:55.276222 systemd[1]: Mounting boot.mount...
Oct 2 19:14:55.317282 systemd[1]: Mounted boot.mount.
Oct 2 19:14:55.344895 systemd[1]: Finished systemd-boot-update.service.
Oct 2 19:14:55.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:55.557259 systemd[1]: Finished systemd-tmpfiles-setup.service.
Oct 2 19:14:55.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:55.562039 systemd[1]: Starting audit-rules.service...
Oct 2 19:14:55.566849 systemd[1]: Starting clean-ca-certificates.service...
Oct 2 19:14:55.575697 systemd[1]: Starting systemd-journal-catalog-update.service...
Oct 2 19:14:55.577000 audit: BPF prog-id=27 op=LOAD
Oct 2 19:14:55.585217 systemd[1]: Starting systemd-resolved.service...
Oct 2 19:14:55.588000 audit: BPF prog-id=28 op=LOAD
Oct 2 19:14:55.595199 systemd[1]: Starting systemd-timesyncd.service...
Oct 2 19:14:55.599429 systemd[1]: Starting systemd-update-utmp.service...
Oct 2 19:14:55.674584 systemd[1]: Finished clean-ca-certificates.service.
Oct 2 19:14:55.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:55.677185 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 2 19:14:55.708000 audit[1510]: SYSTEM_BOOT pid=1510 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:55.721667 systemd[1]: Finished systemd-update-utmp.service.
Oct 2 19:14:55.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:55.800039 systemd[1]: Started systemd-timesyncd.service.
Oct 2 19:14:55.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:55.802172 systemd[1]: Reached target time-set.target.
Oct 2 19:14:55.843250 systemd-resolved[1508]: Positive Trust Anchors:
Oct 2 19:14:55.843274 systemd-resolved[1508]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 2 19:14:55.843326 systemd-resolved[1508]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Oct 2 19:14:55.994267 systemd[1]: Finished systemd-journal-catalog-update.service.
Oct 2 19:14:55.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:56.128164 systemd-networkd[1368]: eth0: Gained IPv6LL
Oct 2 19:14:56.131880 systemd[1]: Finished systemd-networkd-wait-online.service.
Oct 2 19:14:56.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:14:56.134000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Oct 2 19:14:56.134000 audit[1526]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdd3adc60 a2=420 a3=0 items=0 ppid=1505 pid=1526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 2 19:14:56.134000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Oct 2 19:14:56.137088 augenrules[1526]: No rules
Oct 2 19:14:56.138506 systemd-resolved[1508]: Defaulting to hostname 'linux'.
Oct 2 19:14:56.139736 systemd[1]: Finished audit-rules.service.
Oct 2 19:14:56.145059 systemd[1]: Started systemd-resolved.service.
Oct 2 19:14:56.147075 systemd[1]: Reached target network.target.
Oct 2 19:14:56.148900 systemd[1]: Reached target network-online.target.
Oct 2 19:14:56.150720 systemd[1]: Reached target nss-lookup.target.
Oct 2 19:14:56.169895 systemd-timesyncd[1509]: Contacted time server 206.82.28.3:123 (0.flatcar.pool.ntp.org).
Oct 2 19:14:56.170153 systemd-timesyncd[1509]: Initial clock synchronization to Mon 2023-10-02 19:14:56.219018 UTC.
Oct 2 19:14:56.209134 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 2 19:14:56.210267 systemd[1]: Finished systemd-machine-id-commit.service.
Oct 2 19:14:56.609231 ldconfig[1479]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 2 19:14:56.614973 systemd[1]: Finished ldconfig.service.
Oct 2 19:14:56.619129 systemd[1]: Starting systemd-update-done.service...
Oct 2 19:14:56.641486 systemd[1]: Finished systemd-update-done.service.
Oct 2 19:14:56.646052 systemd[1]: Reached target sysinit.target.
Oct 2 19:14:56.647874 systemd[1]: Started motdgen.path.
Oct 2 19:14:56.649478 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Oct 2 19:14:56.652053 systemd[1]: Started logrotate.timer.
Oct 2 19:14:56.653973 systemd[1]: Started mdadm.timer.
Oct 2 19:14:56.655496 systemd[1]: Started systemd-tmpfiles-clean.timer.
Oct 2 19:14:56.657279 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 2 19:14:56.657330 systemd[1]: Reached target paths.target.
Oct 2 19:14:56.658839 systemd[1]: Reached target timers.target.
Oct 2 19:14:56.669016 systemd[1]: Listening on dbus.socket.
Oct 2 19:14:56.672830 systemd[1]: Starting docker.socket...
Oct 2 19:14:56.681739 systemd[1]: Listening on sshd.socket.
Oct 2 19:14:56.683780 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 2 19:14:56.684822 systemd[1]: Listening on docker.socket.
Oct 2 19:14:56.686818 systemd[1]: Reached target sockets.target.
Oct 2 19:14:56.688785 systemd[1]: Reached target basic.target.
Oct 2 19:14:56.690558 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Oct 2 19:14:56.690743 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Oct 2 19:14:56.692940 systemd[1]: Started amazon-ssm-agent.service.
Oct 2 19:14:56.713726 systemd[1]: Starting containerd.service...
Oct 2 19:14:56.725386 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Oct 2 19:14:56.737216 systemd[1]: Starting dbus.service...
Oct 2 19:14:56.740732 systemd[1]: Starting enable-oem-cloudinit.service...
Oct 2 19:14:56.746112 systemd[1]: Starting extend-filesystems.service...
Oct 2 19:14:56.749131 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Oct 2 19:14:56.751557 systemd[1]: Starting motdgen.service...
Oct 2 19:14:56.755367 systemd[1]: Started nvidia.service.
Oct 2 19:14:56.760596 systemd[1]: Starting prepare-cni-plugins.service...
Oct 2 19:14:56.765405 systemd[1]: Starting prepare-critools.service...
Oct 2 19:14:56.770571 systemd[1]: Starting ssh-key-proc-cmdline.service...
Oct 2 19:14:56.782495 systemd[1]: Starting sshd-keygen.service...
Oct 2 19:14:56.799488 systemd[1]: Starting systemd-logind.service...
Oct 2 19:14:56.802157 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 2 19:14:56.802301 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 2 19:14:56.803302 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 2 19:14:56.809007 systemd[1]: Starting update-engine.service...
Oct 2 19:14:56.817615 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Oct 2 19:14:56.972884 tar[1553]: crictl
Oct 2 19:14:56.985978 jq[1538]: false
Oct 2 19:14:56.990874 amazon-ssm-agent[1534]: 2023/10/02 19:14:56 Failed to load instance info from vault. RegistrationKey does not exist.
Oct 2 19:14:56.995825 jq[1548]: true
Oct 2 19:14:57.009033 tar[1560]: ./
Oct 2 19:14:57.009033 tar[1560]: ./macvlan
Oct 2 19:14:57.013804 amazon-ssm-agent[1534]: Initializing new seelog logger
Oct 2 19:14:57.013804 amazon-ssm-agent[1534]: New Seelog Logger Creation Complete
Oct 2 19:14:57.013804 amazon-ssm-agent[1534]: 2023/10/02 19:14:57 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Oct 2 19:14:57.013804 amazon-ssm-agent[1534]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Oct 2 19:14:57.017279 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 2 19:14:57.017666 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Oct 2 19:14:57.022239 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 2 19:14:57.022589 systemd[1]: Finished ssh-key-proc-cmdline.service.
Oct 2 19:14:57.034937 amazon-ssm-agent[1534]: 2023/10/02 19:14:57 processing appconfig overrides
Oct 2 19:14:57.101332 update_engine[1547]: I1002 19:14:57.099973 1547 main.cc:92] Flatcar Update Engine starting
Oct 2 19:14:57.109151 dbus-daemon[1537]: [system] SELinux support is enabled
Oct 2 19:14:57.115901 systemd[1]: Started dbus.service.
Oct 2 19:14:57.120776 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 2 19:14:57.120819 systemd[1]: Reached target system-config.target.
Oct 2 19:14:57.123804 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 2 19:14:57.123848 systemd[1]: Reached target user-config.target.
Oct 2 19:14:57.133180 jq[1562]: true
Oct 2 19:14:57.151166 extend-filesystems[1539]: Found nvme0n1
Oct 2 19:14:57.151166 extend-filesystems[1539]: Found nvme0n1p1
Oct 2 19:14:57.151166 extend-filesystems[1539]: Found nvme0n1p2
Oct 2 19:14:57.156361 extend-filesystems[1539]: Found nvme0n1p3
Oct 2 19:14:57.156361 extend-filesystems[1539]: Found usr
Oct 2 19:14:57.156361 extend-filesystems[1539]: Found nvme0n1p4
Oct 2 19:14:57.156361 extend-filesystems[1539]: Found nvme0n1p6
Oct 2 19:14:57.156361 extend-filesystems[1539]: Found nvme0n1p7
Oct 2 19:14:57.156361 extend-filesystems[1539]: Found nvme0n1p9
Oct 2 19:14:57.156361 extend-filesystems[1539]: Checking size of /dev/nvme0n1p9
Oct 2 19:14:57.172298 dbus-daemon[1537]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1368 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Oct 2 19:14:57.178030 systemd[1]: Starting systemd-hostnamed.service...
Oct 2 19:14:57.193148 systemd[1]: Started update-engine.service.
Oct 2 19:14:57.200578 update_engine[1547]: I1002 19:14:57.193207 1547 update_check_scheduler.cc:74] Next update check in 3m37s
Oct 2 19:14:57.198047 systemd[1]: Started locksmithd.service.
Oct 2 19:14:57.266995 systemd[1]: motdgen.service: Deactivated successfully.
Oct 2 19:14:57.267390 systemd[1]: Finished motdgen.service.
Oct 2 19:14:57.314152 extend-filesystems[1539]: Resized partition /dev/nvme0n1p9
Oct 2 19:14:57.337014 extend-filesystems[1602]: resize2fs 1.46.5 (30-Dec-2021)
Oct 2 19:14:57.352942 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Oct 2 19:14:57.403951 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Oct 2 19:14:57.466212 extend-filesystems[1602]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Oct 2 19:14:57.466212 extend-filesystems[1602]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 2 19:14:57.466212 extend-filesystems[1602]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Oct 2 19:14:57.474142 extend-filesystems[1539]: Resized filesystem in /dev/nvme0n1p9
Oct 2 19:14:57.484820 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 2 19:14:57.485190 systemd[1]: Finished extend-filesystems.service.
Oct 2 19:14:57.517094 bash[1614]: Updated "/home/core/.ssh/authorized_keys"
Oct 2 19:14:57.518974 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Oct 2 19:14:57.529746 tar[1560]: ./static
Oct 2 19:14:57.537699 systemd-logind[1546]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 2 19:14:57.538110 systemd-logind[1546]: New seat seat0.
Oct 2 19:14:57.550773 systemd[1]: Started systemd-logind.service.
Oct 2 19:14:57.636185 systemd[1]: nvidia.service: Deactivated successfully.
Oct 2 19:14:57.665563 env[1566]: time="2023-10-02T19:14:57.665486851Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Oct 2 19:14:57.701257 dbus-daemon[1537]: [system] Successfully activated service 'org.freedesktop.hostname1'
Oct 2 19:14:57.701507 systemd[1]: Started systemd-hostnamed.service.
Oct 2 19:14:57.705489 dbus-daemon[1537]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1584 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Oct 2 19:14:57.710156 systemd[1]: Starting polkit.service...
Oct 2 19:14:57.770013 tar[1560]: ./vlan
Oct 2 19:14:57.801722 polkitd[1624]: Started polkitd version 121
Oct 2 19:14:57.836395 polkitd[1624]: Loading rules from directory /etc/polkit-1/rules.d
Oct 2 19:14:57.836515 polkitd[1624]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 2 19:14:57.846137 polkitd[1624]: Finished loading, compiling and executing 2 rules
Oct 2 19:14:57.846901 dbus-daemon[1537]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Oct 2 19:14:57.847184 systemd[1]: Started polkit.service.
Oct 2 19:14:57.852214 polkitd[1624]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Oct 2 19:14:57.860220 env[1566]: time="2023-10-02T19:14:57.860029783Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 2 19:14:57.865270 env[1566]: time="2023-10-02T19:14:57.865218040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 2 19:14:57.888527 env[1566]: time="2023-10-02T19:14:57.886630964Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 2 19:14:57.888527 env[1566]: time="2023-10-02T19:14:57.886699348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 2 19:14:57.888527 env[1566]: time="2023-10-02T19:14:57.887097687Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 2 19:14:57.888527 env[1566]: time="2023-10-02T19:14:57.887137265Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 2 19:14:57.888527 env[1566]: time="2023-10-02T19:14:57.887168394Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Oct 2 19:14:57.888527 env[1566]: time="2023-10-02T19:14:57.887192866Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 2 19:14:57.888527 env[1566]: time="2023-10-02T19:14:57.887377289Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 2 19:14:57.888527 env[1566]: time="2023-10-02T19:14:57.887955742Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 2 19:14:57.888527 env[1566]: time="2023-10-02T19:14:57.888214182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 2 19:14:57.888527 env[1566]: time="2023-10-02T19:14:57.888254423Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 2 19:14:57.889165 env[1566]: time="2023-10-02T19:14:57.888386496Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Oct 2 19:14:57.889165 env[1566]: time="2023-10-02T19:14:57.888413652Z" level=info msg="metadata content store policy set" policy=shared
Oct 2 19:14:57.898852 env[1566]: time="2023-10-02T19:14:57.898667049Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 2 19:14:57.898852 env[1566]: time="2023-10-02T19:14:57.898733230Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 2 19:14:57.898852 env[1566]: time="2023-10-02T19:14:57.898766838Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 2 19:14:57.899212 env[1566]: time="2023-10-02T19:14:57.899166332Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 2 19:14:57.899596 env[1566]: time="2023-10-02T19:14:57.899563105Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 2 19:14:57.899757 env[1566]: time="2023-10-02T19:14:57.899727114Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 2 19:14:57.899976 env[1566]: time="2023-10-02T19:14:57.899945302Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 2 19:14:57.900587 env[1566]: time="2023-10-02T19:14:57.900545084Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 2 19:14:57.900747 env[1566]: time="2023-10-02T19:14:57.900717819Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Oct 2 19:14:57.900891 env[1566]: time="2023-10-02T19:14:57.900862123Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 2 19:14:57.901070 env[1566]: time="2023-10-02T19:14:57.901037494Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 2 19:14:57.901221 env[1566]: time="2023-10-02T19:14:57.901192125Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 2 19:14:57.901563 env[1566]: time="2023-10-02T19:14:57.901535273Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 2 19:14:57.901855 env[1566]: time="2023-10-02T19:14:57.901827285Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 2 19:14:57.903182 env[1566]: time="2023-10-02T19:14:57.903138508Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 2 19:14:57.903388 env[1566]: time="2023-10-02T19:14:57.903356444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 2 19:14:57.906779 env[1566]: time="2023-10-02T19:14:57.906720075Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 2 19:14:57.907507 env[1566]: time="2023-10-02T19:14:57.907165455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 2 19:14:57.910093 env[1566]: time="2023-10-02T19:14:57.910033955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 2 19:14:57.910596 env[1566]: time="2023-10-02T19:14:57.910547840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 2 19:14:57.910985 systemd-hostnamed[1584]: Hostname set to (transient)
Oct 2 19:14:57.911174 systemd-resolved[1508]: System hostname changed to 'ip-172-31-21-101'.
Oct 2 19:14:57.912257 env[1566]: time="2023-10-02T19:14:57.912207278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 2 19:14:57.912420 env[1566]: time="2023-10-02T19:14:57.912382156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 2 19:14:57.912589 env[1566]: time="2023-10-02T19:14:57.912549968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 2 19:14:57.912765 env[1566]: time="2023-10-02T19:14:57.912725797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 2 19:14:57.912978 env[1566]: time="2023-10-02T19:14:57.912935306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 2 19:14:57.913221 env[1566]: time="2023-10-02T19:14:57.913175342Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 2 19:14:57.913700 env[1566]: time="2023-10-02T19:14:57.913658435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 2 19:14:57.913861 env[1566]: time="2023-10-02T19:14:57.913830351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 2 19:14:57.914036 env[1566]: time="2023-10-02T19:14:57.914006120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 2 19:14:57.916031 env[1566]: time="2023-10-02T19:14:57.915971185Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 2 19:14:57.916240 env[1566]: time="2023-10-02T19:14:57.916200761Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Oct 2 19:14:57.916616 env[1566]: time="2023-10-02T19:14:57.916581813Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 2 19:14:57.916759 env[1566]: time="2023-10-02T19:14:57.916727994Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Oct 2 19:14:57.916937 env[1566]: time="2023-10-02T19:14:57.916892496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 2 19:14:57.918335 env[1566]: time="2023-10-02T19:14:57.918217538Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 2 19:14:57.919654 env[1566]: time="2023-10-02T19:14:57.919049580Z" level=info msg="Connect containerd service"
Oct 2 19:14:57.920052 env[1566]: time="2023-10-02T19:14:57.920009194Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 2 19:14:57.923283 env[1566]: time="2023-10-02T19:14:57.923225464Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 2 19:14:57.926874 coreos-metadata[1536]: Oct 02 19:14:57.926 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Oct 2 19:14:57.928504 coreos-metadata[1536]: Oct 02 19:14:57.928 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1
Oct 2 19:14:57.929481 coreos-metadata[1536]: Oct 02 19:14:57.929 INFO Fetch successful
Oct 2 19:14:57.929626 coreos-metadata[1536]: Oct 02 19:14:57.929 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1
Oct 2 19:14:57.931668 env[1566]: time="2023-10-02T19:14:57.931583142Z" level=info msg="Start subscribing containerd event"
Oct 2 19:14:57.932867 coreos-metadata[1536]: Oct 02 19:14:57.932 INFO Fetch successful
Oct 2 19:14:57.933983 env[1566]: time="2023-10-02T19:14:57.933900948Z" level=info msg="Start recovering state"
Oct 2 19:14:57.935230 unknown[1536]: wrote ssh authorized keys file for user: core
Oct 2 19:14:57.938977 env[1566]: time="2023-10-02T19:14:57.938902483Z" level=info msg="Start event monitor"
Oct 2 19:14:57.943796 env[1566]: time="2023-10-02T19:14:57.943740996Z" level=info msg="Start snapshots syncer"
Oct 2 19:14:57.944413 env[1566]: time="2023-10-02T19:14:57.944376999Z" level=info msg="Start cni network conf syncer for default"
Oct 2 19:14:57.947038 env[1566]: time="2023-10-02T19:14:57.938761261Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 2 19:14:57.947347 env[1566]: time="2023-10-02T19:14:57.947304987Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 2 19:14:57.947681 systemd[1]: Started containerd.service.
Oct 2 19:14:57.950283 env[1566]: time="2023-10-02T19:14:57.950238983Z" level=info msg="containerd successfully booted in 0.287803s"
Oct 2 19:14:57.952228 env[1566]: time="2023-10-02T19:14:57.952133185Z" level=info msg="Start streaming server"
Oct 2 19:14:57.973833 update-ssh-keys[1644]: Updated "/home/core/.ssh/authorized_keys"
Oct 2 19:14:57.975033 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Oct 2 19:14:58.069682 tar[1560]: ./portmap
Oct 2 19:14:58.077409 amazon-ssm-agent[1534]: 2023-10-02 19:14:58 INFO Entering SSM Agent hibernate - AccessDeniedException: User: arn:aws:sts::075585003325:assumed-role/jenkins-test/i-0ad405c7a2916f0a4 is not authorized to perform: ssm:UpdateInstanceInformation on resource: arn:aws:ec2:us-west-2:075585003325:instance/i-0ad405c7a2916f0a4 because no identity-based policy allows the ssm:UpdateInstanceInformation action
Oct 2 19:14:58.077409 amazon-ssm-agent[1534]: status code: 400, request id: e9fac69f-5ec5-4117-95f2-f1c5aa565480
Oct 2 19:14:58.078097 amazon-ssm-agent[1534]: 2023-10-02 19:14:58 INFO Agent is in hibernate mode. Reducing logging. Logging will be reduced to one log per backoff period
Oct 2 19:14:58.162077 tar[1560]: ./host-local
Oct 2 19:14:58.247533 tar[1560]: ./vrf
Oct 2 19:14:58.335954 tar[1560]: ./bridge
Oct 2 19:14:58.459378 tar[1560]: ./tuning
Oct 2 19:14:58.559703 tar[1560]: ./firewall
Oct 2 19:14:58.670763 tar[1560]: ./host-device
Oct 2 19:14:58.741138 tar[1560]: ./sbr
Oct 2 19:14:58.790207 tar[1560]: ./loopback
Oct 2 19:14:58.844145 systemd[1]: Finished prepare-critools.service.
Oct 2 19:14:58.861694 tar[1560]: ./dhcp
Oct 2 19:14:59.008685 tar[1560]: ./ptp
Oct 2 19:14:59.071063 tar[1560]: ./ipvlan
Oct 2 19:14:59.133304 tar[1560]: ./bandwidth
Oct 2 19:14:59.219503 systemd[1]: Finished prepare-cni-plugins.service.
Oct 2 19:14:59.344890 locksmithd[1585]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 2 19:15:01.262333 sshd_keygen[1581]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 2 19:15:01.322527 systemd[1]: Finished sshd-keygen.service.
Oct 2 19:15:01.327630 systemd[1]: Starting issuegen.service...
Oct 2 19:15:01.347268 systemd[1]: issuegen.service: Deactivated successfully.
Oct 2 19:15:01.347628 systemd[1]: Finished issuegen.service.
Oct 2 19:15:01.352748 systemd[1]: Starting systemd-user-sessions.service...
Oct 2 19:15:01.375312 systemd[1]: Finished systemd-user-sessions.service.
Oct 2 19:15:01.380762 systemd[1]: Started getty@tty1.service.
Oct 2 19:15:01.386195 systemd[1]: Started serial-getty@ttyS0.service.
Oct 2 19:15:01.388751 systemd[1]: Reached target getty.target.
Oct 2 19:15:01.390706 systemd[1]: Reached target multi-user.target.
Oct 2 19:15:01.395376 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Oct 2 19:15:01.418966 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 2 19:15:01.419323 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Oct 2 19:15:01.421667 systemd[1]: Startup finished in 1.204s (kernel) + 11.841s (initrd) + 13.974s (userspace) = 27.020s.
Oct 2 19:15:06.358652 systemd[1]: Created slice system-sshd.slice.
Oct 2 19:15:06.361079 systemd[1]: Started sshd@0-172.31.21.101:22-139.178.89.65:48602.service.
Oct 2 19:15:06.629783 sshd[1746]: Accepted publickey for core from 139.178.89.65 port 48602 ssh2: RSA SHA256:xq1jsPPMn3xJqYX9WbisZ9n0n6wOxmd44nRnO32wqqo
Oct 2 19:15:06.635057 sshd[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 2 19:15:06.652851 systemd[1]: Created slice user-500.slice.
Oct 2 19:15:06.655324 systemd[1]: Starting user-runtime-dir@500.service...
Oct 2 19:15:06.665100 systemd-logind[1546]: New session 1 of user core.
Oct 2 19:15:06.680943 systemd[1]: Finished user-runtime-dir@500.service.
Oct 2 19:15:06.685736 systemd[1]: Starting user@500.service...
Oct 2 19:15:06.699637 (systemd)[1749]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 2 19:15:06.914190 systemd[1749]: Queued start job for default target default.target.
Oct 2 19:15:06.916418 systemd[1749]: Reached target paths.target.
Oct 2 19:15:06.916666 systemd[1749]: Reached target sockets.target.
Oct 2 19:15:06.916809 systemd[1749]: Reached target timers.target.
Oct 2 19:15:06.916980 systemd[1749]: Reached target basic.target.
Oct 2 19:15:06.917191 systemd[1749]: Reached target default.target.
Oct 2 19:15:06.917293 systemd[1]: Started user@500.service.
Oct 2 19:15:06.918154 systemd[1749]: Startup finished in 198ms.
Oct 2 19:15:06.919193 systemd[1]: Started session-1.scope.
Oct 2 19:15:07.074409 systemd[1]: Started sshd@1-172.31.21.101:22-139.178.89.65:48606.service.
Oct 2 19:15:07.260539 sshd[1758]: Accepted publickey for core from 139.178.89.65 port 48606 ssh2: RSA SHA256:xq1jsPPMn3xJqYX9WbisZ9n0n6wOxmd44nRnO32wqqo
Oct 2 19:15:07.263656 sshd[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 2 19:15:07.273141 systemd-logind[1546]: New session 2 of user core.
Oct 2 19:15:07.273253 systemd[1]: Started session-2.scope.
Oct 2 19:15:07.422042 sshd[1758]: pam_unix(sshd:session): session closed for user core
Oct 2 19:15:07.428737 systemd[1]: session-2.scope: Deactivated successfully.
Oct 2 19:15:07.430147 systemd-logind[1546]: Session 2 logged out. Waiting for processes to exit.
Oct 2 19:15:07.430496 systemd[1]: sshd@1-172.31.21.101:22-139.178.89.65:48606.service: Deactivated successfully.
Oct 2 19:15:07.432547 systemd-logind[1546]: Removed session 2.
Oct 2 19:15:07.455768 systemd[1]: Started sshd@2-172.31.21.101:22-139.178.89.65:48618.service.
Oct 2 19:15:07.642068 sshd[1764]: Accepted publickey for core from 139.178.89.65 port 48618 ssh2: RSA SHA256:xq1jsPPMn3xJqYX9WbisZ9n0n6wOxmd44nRnO32wqqo
Oct 2 19:15:07.645468 sshd[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 2 19:15:07.655809 systemd[1]: Started session-3.scope.
Oct 2 19:15:07.656742 systemd-logind[1546]: New session 3 of user core.
Oct 2 19:15:07.790103 sshd[1764]: pam_unix(sshd:session): session closed for user core
Oct 2 19:15:07.797619 systemd-logind[1546]: Session 3 logged out. Waiting for processes to exit.
Oct 2 19:15:07.798213 systemd[1]: sshd@2-172.31.21.101:22-139.178.89.65:48618.service: Deactivated successfully.
Oct 2 19:15:07.799450 systemd[1]: session-3.scope: Deactivated successfully.
Oct 2 19:15:07.800991 systemd-logind[1546]: Removed session 3.
Oct 2 19:15:07.820975 systemd[1]: Started sshd@3-172.31.21.101:22-139.178.89.65:48630.service.
Oct 2 19:15:08.009097 sshd[1770]: Accepted publickey for core from 139.178.89.65 port 48630 ssh2: RSA SHA256:xq1jsPPMn3xJqYX9WbisZ9n0n6wOxmd44nRnO32wqqo
Oct 2 19:15:08.012550 sshd[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 2 19:15:08.022068 systemd[1]: Started session-4.scope.
Oct 2 19:15:08.023142 systemd-logind[1546]: New session 4 of user core.
Oct 2 19:15:08.170558 sshd[1770]: pam_unix(sshd:session): session closed for user core
Oct 2 19:15:08.176978 systemd[1]: session-4.scope: Deactivated successfully.
Oct 2 19:15:08.178146 systemd-logind[1546]: Session 4 logged out. Waiting for processes to exit.
Oct 2 19:15:08.178519 systemd[1]: sshd@3-172.31.21.101:22-139.178.89.65:48630.service: Deactivated successfully.
Oct 2 19:15:08.180528 systemd-logind[1546]: Removed session 4.
Oct 2 19:15:08.204639 systemd[1]: Started sshd@4-172.31.21.101:22-139.178.89.65:48646.service.
Oct 2 19:15:08.392776 sshd[1776]: Accepted publickey for core from 139.178.89.65 port 48646 ssh2: RSA SHA256:xq1jsPPMn3xJqYX9WbisZ9n0n6wOxmd44nRnO32wqqo
Oct 2 19:15:08.396439 sshd[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 2 19:15:08.404778 systemd-logind[1546]: New session 5 of user core.
Oct 2 19:15:08.405719 systemd[1]: Started session-5.scope.
Oct 2 19:15:08.576543 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 2 19:15:08.577569 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 2 19:15:08.592177 dbus-daemon[1537]: avc: received setenforce notice (enforcing=1)
Oct 2 19:15:08.595198 sudo[1779]: pam_unix(sudo:session): session closed for user root
Oct 2 19:15:08.620664 sshd[1776]: pam_unix(sshd:session): session closed for user core
Oct 2 19:15:08.628025 systemd[1]: sshd@4-172.31.21.101:22-139.178.89.65:48646.service: Deactivated successfully.
Oct 2 19:15:08.629453 systemd[1]: session-5.scope: Deactivated successfully.
Oct 2 19:15:08.630616 systemd-logind[1546]: Session 5 logged out. Waiting for processes to exit.
Oct 2 19:15:08.632514 systemd-logind[1546]: Removed session 5.
Oct 2 19:15:08.651457 systemd[1]: Started sshd@5-172.31.21.101:22-139.178.89.65:48654.service.
Oct 2 19:15:08.841569 sshd[1783]: Accepted publickey for core from 139.178.89.65 port 48654 ssh2: RSA SHA256:xq1jsPPMn3xJqYX9WbisZ9n0n6wOxmd44nRnO32wqqo
Oct 2 19:15:08.845186 sshd[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 2 19:15:08.854317 systemd[1]: Started session-6.scope.
Oct 2 19:15:08.855108 systemd-logind[1546]: New session 6 of user core.
Oct 2 19:15:08.977345 sudo[1787]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 2 19:15:08.978282 sudo[1787]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 2 19:15:08.985750 sudo[1787]: pam_unix(sudo:session): session closed for user root
Oct 2 19:15:08.999574 sudo[1786]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Oct 2 19:15:09.000643 sudo[1786]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 2 19:15:09.026129 systemd[1]: Stopping audit-rules.service...
Oct 2 19:15:09.030000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Oct 2 19:15:09.033045 kernel: kauditd_printk_skb: 74 callbacks suppressed
Oct 2 19:15:09.033174 kernel: audit: type=1305 audit(1696274109.030:164): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Oct 2 19:15:09.030000 audit[1790]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdb2f4d60 a2=420 a3=0 items=0 ppid=1 pid=1790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 2 19:15:09.049743 kernel: audit: type=1300 audit(1696274109.030:164): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdb2f4d60 a2=420 a3=0 items=0 ppid=1 pid=1790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 2 19:15:09.049862 auditctl[1790]: No rules
Oct 2 19:15:09.030000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
Oct 2 19:15:09.053660 kernel: audit: type=1327 audit(1696274109.030:164): proctitle=2F7362696E2F617564697463746C002D44
Oct 2 19:15:09.053781 kernel: audit: type=1131 audit(1696274109.049:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:09.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:09.050966 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 2 19:15:09.051305 systemd[1]: Stopped audit-rules.service.
Oct 2 19:15:09.057029 systemd[1]: Starting audit-rules.service...
Oct 2 19:15:09.121377 augenrules[1807]: No rules
Oct 2 19:15:09.123111 systemd[1]: Finished audit-rules.service.
Oct 2 19:15:09.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:09.131760 sudo[1786]: pam_unix(sudo:session): session closed for user root
Oct 2 19:15:09.130000 audit[1786]: USER_END pid=1786 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:09.141428 kernel: audit: type=1130 audit(1696274109.122:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:09.141531 kernel: audit: type=1106 audit(1696274109.130:167): pid=1786 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:09.131000 audit[1786]: CRED_DISP pid=1786 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:09.150949 kernel: audit: type=1104 audit(1696274109.131:168): pid=1786 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:09.156296 sshd[1783]: pam_unix(sshd:session): session closed for user core
Oct 2 19:15:09.157000 audit[1783]: USER_END pid=1783 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Oct 2 19:15:09.162722 systemd-logind[1546]: Session 6 logged out. Waiting for processes to exit.
Oct 2 19:15:09.164183 systemd[1]: session-6.scope: Deactivated successfully.
Oct 2 19:15:09.165413 systemd[1]: sshd@5-172.31.21.101:22-139.178.89.65:48654.service: Deactivated successfully.
Oct 2 19:15:09.168450 systemd-logind[1546]: Removed session 6.
Oct 2 19:15:09.157000 audit[1783]: CRED_DISP pid=1783 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Oct 2 19:15:09.184528 kernel: audit: type=1106 audit(1696274109.157:169): pid=1783 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Oct 2 19:15:09.184643 kernel: audit: type=1104 audit(1696274109.157:170): pid=1783 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Oct 2 19:15:09.184715 kernel: audit: type=1131 audit(1696274109.163:171): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.21.101:22-139.178.89.65:48654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:09.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.21.101:22-139.178.89.65:48654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:09.190413 systemd[1]: Started sshd@6-172.31.21.101:22-139.178.89.65:48670.service.
Oct 2 19:15:09.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.21.101:22-139.178.89.65:48670 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:09.369000 audit[1813]: USER_ACCT pid=1813 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Oct 2 19:15:09.372751 sshd[1813]: Accepted publickey for core from 139.178.89.65 port 48670 ssh2: RSA SHA256:xq1jsPPMn3xJqYX9WbisZ9n0n6wOxmd44nRnO32wqqo
Oct 2 19:15:09.372000 audit[1813]: CRED_ACQ pid=1813 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Oct 2 19:15:09.372000 audit[1813]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffea57e600 a2=3 a3=1 items=0 ppid=1 pid=1813 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 2 19:15:09.372000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Oct 2 19:15:09.374512 sshd[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 2 19:15:09.382496 systemd-logind[1546]: New session 7 of user core.
Oct 2 19:15:09.383624 systemd[1]: Started session-7.scope.
Oct 2 19:15:09.392000 audit[1813]: USER_START pid=1813 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Oct 2 19:15:09.399000 audit[1815]: CRED_ACQ pid=1815 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Oct 2 19:15:09.501000 audit[1816]: USER_ACCT pid=1816 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:09.503284 sudo[1816]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 2 19:15:09.503000 audit[1816]: CRED_REFR pid=1816 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:09.504410 sudo[1816]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 2 19:15:09.507000 audit[1816]: USER_START pid=1816 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:10.180363 systemd[1]: Reloading.
Oct 2 19:15:10.388115 /usr/lib/systemd/system-generators/torcx-generator[1845]: time="2023-10-02T19:15:10Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]"
Oct 2 19:15:10.391060 /usr/lib/systemd/system-generators/torcx-generator[1845]: time="2023-10-02T19:15:10Z" level=info msg="torcx already run"
Oct 2 19:15:10.601888 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Oct 2 19:15:10.602565 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 2 19:15:10.646202 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 2 19:15:10.793000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.793000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.793000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.793000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.793000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.793000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.794000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.794000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.794000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.794000 audit: BPF prog-id=37 op=LOAD
Oct 2 19:15:10.794000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.794000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.794000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.794000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.794000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.794000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.794000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.795000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.795000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.795000 audit: BPF prog-id=38 op=LOAD
Oct 2 19:15:10.795000 audit: BPF prog-id=21 op=UNLOAD
Oct 2 19:15:10.795000 audit: BPF prog-id=22 op=UNLOAD
Oct 2 19:15:10.799000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.799000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.799000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.800000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.800000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.800000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.801000 audit: BPF prog-id=39 op=LOAD
Oct 2 19:15:10.801000 audit: BPF prog-id=32 op=UNLOAD
Oct 2 19:15:10.801000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.801000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.801000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.801000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.801000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.801000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.801000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.801000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.802000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.802000 audit: BPF prog-id=40 op=LOAD
Oct 2 19:15:10.802000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.802000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.802000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.802000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.802000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.802000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.803000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.803000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.803000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.803000 audit: BPF prog-id=41 op=LOAD
Oct 2 19:15:10.803000 audit: BPF prog-id=33 op=UNLOAD
Oct 2 19:15:10.803000 audit: BPF prog-id=34 op=UNLOAD
Oct 2 19:15:10.806000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.806000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.806000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.806000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.806000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.806000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.806000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.806000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.806000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.807000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.807000 audit: BPF prog-id=42 op=LOAD
Oct 2 19:15:10.807000 audit: BPF prog-id=27 op=UNLOAD
Oct 2 19:15:10.809000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.809000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.809000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.809000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.809000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.809000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.809000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.809000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.809000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.810000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.810000 audit: BPF prog-id=43 op=LOAD
Oct 2 19:15:10.810000 audit: BPF prog-id=24 op=UNLOAD
Oct 2 19:15:10.810000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.811000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.811000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.811000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.811000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.811000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.811000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.811000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.811000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.811000 audit: BPF prog-id=44 op=LOAD
Oct 2 19:15:10.811000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.812000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.812000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.812000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.812000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.812000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.812000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.812000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.812000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.812000 audit: BPF prog-id=45 op=LOAD
Oct 2 19:15:10.813000 audit: BPF prog-id=25 op=UNLOAD
Oct 2 19:15:10.813000 audit: BPF prog-id=26 op=UNLOAD
Oct 2 19:15:10.816000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.816000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.816000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.816000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.816000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.816000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.816000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.816000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.816000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.817000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.817000 audit: BPF prog-id=46 op=LOAD
Oct 2 19:15:10.817000 audit: BPF prog-id=29 op=UNLOAD
Oct 2 19:15:10.817000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.817000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.817000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.817000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.817000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.817000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.817000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.817000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.817000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.817000 audit: BPF prog-id=47 op=LOAD Oct 2 19:15:10.817000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.817000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.817000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.817000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.817000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.817000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.817000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:15:10.817000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.817000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.817000 audit: BPF prog-id=48 op=LOAD Oct 2 19:15:10.817000 audit: BPF prog-id=30 op=UNLOAD Oct 2 19:15:10.817000 audit: BPF prog-id=31 op=UNLOAD Oct 2 19:15:10.818000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.818000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.818000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.818000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.818000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.818000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.818000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.818000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.818000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.818000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.818000 audit: BPF prog-id=49 op=LOAD Oct 2 19:15:10.818000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:15:10.821000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.821000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.821000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.821000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.821000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.821000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.821000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.821000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.821000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.821000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.821000 audit: BPF prog-id=50 op=LOAD Oct 2 19:15:10.821000 audit: BPF prog-id=18 op=UNLOAD Oct 2 19:15:10.821000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.821000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.821000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.821000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.821000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.821000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.821000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.821000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.821000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.821000 audit: BPF prog-id=51 op=LOAD Oct 2 19:15:10.821000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.821000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.821000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.821000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.821000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:15:10.821000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.821000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.821000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.822000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.822000 audit: BPF prog-id=52 op=LOAD Oct 2 19:15:10.822000 audit: BPF prog-id=19 op=UNLOAD Oct 2 19:15:10.822000 audit: BPF prog-id=20 op=UNLOAD Oct 2 19:15:10.825000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.825000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.825000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.825000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.825000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:15:10.825000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.825000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.825000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.825000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.825000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.825000 audit: BPF prog-id=53 op=LOAD Oct 2 19:15:10.825000 audit: BPF prog-id=23 op=UNLOAD Oct 2 19:15:10.826000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.826000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.826000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:10.826000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Oct 2 19:15:10.826000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.826000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.826000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.826000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.826000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.827000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:15:10.827000 audit: BPF prog-id=54 op=LOAD
Oct 2 19:15:10.827000 audit: BPF prog-id=28 op=UNLOAD
Oct 2 19:15:10.846104 systemd[1]: Started kubelet.service.
Oct 2 19:15:10.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:10.884981 systemd[1]: Starting coreos-metadata.service...
Oct 2 19:15:11.036561 kubelet[1900]: E1002 19:15:11.036459 1900 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Oct 2 19:15:11.040256 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 2 19:15:11.040580 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 2 19:15:11.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Oct 2 19:15:11.081507 coreos-metadata[1908]: Oct 02 19:15:11.081 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Oct 2 19:15:11.082772 coreos-metadata[1908]: Oct 02 19:15:11.082 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1
Oct 2 19:15:11.083471 coreos-metadata[1908]: Oct 02 19:15:11.083 INFO Fetch successful
Oct 2 19:15:11.083471 coreos-metadata[1908]: Oct 02 19:15:11.083 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1
Oct 2 19:15:11.084143 coreos-metadata[1908]: Oct 02 19:15:11.084 INFO Fetch successful
Oct 2 19:15:11.084143 coreos-metadata[1908]: Oct 02 19:15:11.084 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1
Oct 2 19:15:11.084835 coreos-metadata[1908]: Oct 02 19:15:11.084 INFO Fetch successful
Oct 2 19:15:11.084835 coreos-metadata[1908]: Oct 02 19:15:11.084 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1
Oct 2 19:15:11.085494 coreos-metadata[1908]: Oct 02 19:15:11.085 INFO Fetch successful
Oct 2 19:15:11.085494 coreos-metadata[1908]: Oct 02 19:15:11.085 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1
Oct 2 19:15:11.086210 coreos-metadata[1908]: Oct 02 19:15:11.086 INFO Fetch successful
Oct 2 19:15:11.086210 coreos-metadata[1908]: Oct 02 19:15:11.086 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1
Oct 2 19:15:11.086855 coreos-metadata[1908]: Oct 02 19:15:11.086 INFO Fetch successful
Oct 2 19:15:11.086855 coreos-metadata[1908]: Oct 02 19:15:11.086 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1
Oct 2 19:15:11.087541 coreos-metadata[1908]: Oct 02 19:15:11.087 INFO Fetch successful
Oct 2 19:15:11.087541 coreos-metadata[1908]: Oct 02 19:15:11.087 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1
Oct 2 19:15:11.088093 coreos-metadata[1908]: Oct 02 19:15:11.088 INFO Fetch successful
Oct 2 19:15:11.109920 systemd[1]: Finished coreos-metadata.service.
Oct 2 19:15:11.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:11.825678 systemd[1]: Stopped kubelet.service.
Oct 2 19:15:11.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:11.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:15:11.870772 systemd[1]: Reloading.
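# Editor's note: the kubelet exit above (open /var/lib/kubelet/config.yaml: no such file or directory) is the usual state of a kubeadm-managed node before `kubeadm init`/`kubeadm join` has written that file; systemd keeps restarting the unit until it appears. A minimal, hypothetical check script (the path is the one named in the log; the script itself is not part of this system):

```shell
#!/bin/sh
# Report whether the kubelet config that kubeadm writes is in place yet.
# Until it exists, kubelet exits with status 1 and systemd restarts it,
# producing the failure/restart entries seen in this log.
cfg="${1:-/var/lib/kubelet/config.yaml}"
if [ -f "$cfg" ]; then
    echo "present: $cfg"
else
    echo "missing: $cfg"
fi
```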
Oct 2 19:15:12.086718 /usr/lib/systemd/system-generators/torcx-generator[1966]: time="2023-10-02T19:15:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]"
Oct 2 19:15:12.086780 /usr/lib/systemd/system-generators/torcx-generator[1966]: time="2023-10-02T19:15:12Z" level=info msg="torcx already run"
Oct 2 19:15:12.320091 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Oct 2 19:15:12.320334 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 2 19:15:12.363778 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 2 19:15:12.517000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.517000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.517000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.517000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.517000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.517000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.517000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.517000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.518000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.518000 audit: BPF prog-id=55 op=LOAD Oct 2 19:15:12.518000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.518000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.518000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.518000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.518000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.518000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.518000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.518000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.519000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.519000 audit: BPF prog-id=56 op=LOAD Oct 2 19:15:12.519000 audit: BPF prog-id=37 op=UNLOAD Oct 2 19:15:12.519000 audit: BPF prog-id=38 op=UNLOAD Oct 2 19:15:12.523000 audit[1]: AVC avc: denied { bpf } 
for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.523000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.523000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.524000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.524000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.524000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.524000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.524000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.524000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.525000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:15:12.525000 audit: BPF prog-id=57 op=LOAD Oct 2 19:15:12.525000 audit: BPF prog-id=39 op=UNLOAD Oct 2 19:15:12.525000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.525000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.525000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.525000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.525000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.525000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.525000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.525000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.526000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:15:12.526000 audit: BPF prog-id=58 op=LOAD Oct 2 19:15:12.526000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.526000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.526000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.526000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.526000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.526000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.527000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.527000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.527000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.527000 audit: BPF prog-id=59 op=LOAD Oct 2 
19:15:12.527000 audit: BPF prog-id=40 op=UNLOAD Oct 2 19:15:12.527000 audit: BPF prog-id=41 op=UNLOAD Oct 2 19:15:12.530000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.530000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.530000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.530000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.530000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.530000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.530000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.530000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.530000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.531000 audit[1]: 
AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.531000 audit: BPF prog-id=60 op=LOAD Oct 2 19:15:12.531000 audit: BPF prog-id=42 op=UNLOAD Oct 2 19:15:12.533000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.533000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.533000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.533000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.533000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.533000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.533000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.533000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.533000 audit[1]: AVC avc: denied { bpf } for 
pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.534000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.534000 audit: BPF prog-id=61 op=LOAD Oct 2 19:15:12.534000 audit: BPF prog-id=43 op=UNLOAD Oct 2 19:15:12.535000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.535000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.535000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.535000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.535000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.535000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.535000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.535000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.535000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.535000 audit: BPF prog-id=62 op=LOAD Oct 2 19:15:12.536000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.536000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.536000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.536000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.536000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.536000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.536000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.536000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.536000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.536000 audit: BPF prog-id=63 op=LOAD Oct 2 19:15:12.537000 audit: BPF prog-id=44 op=UNLOAD Oct 2 19:15:12.537000 audit: BPF prog-id=45 op=UNLOAD Oct 2 19:15:12.540000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.540000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.540000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.541000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.541000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.541000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.541000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.541000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.541000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.542000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.542000 audit: BPF prog-id=64 op=LOAD Oct 2 19:15:12.542000 audit: BPF prog-id=46 op=UNLOAD Oct 2 19:15:12.542000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.542000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.543000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.543000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.543000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.543000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.543000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.543000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.543000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.543000 audit: BPF prog-id=65 op=LOAD Oct 2 19:15:12.543000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.543000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.543000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.544000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.544000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.544000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.544000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:15:12.544000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.544000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.544000 audit: BPF prog-id=66 op=LOAD Oct 2 19:15:12.544000 audit: BPF prog-id=47 op=UNLOAD Oct 2 19:15:12.545000 audit: BPF prog-id=48 op=UNLOAD Oct 2 19:15:12.545000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.545000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.545000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.545000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.545000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.545000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.545000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.545000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.545000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.546000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.546000 audit: BPF prog-id=67 op=LOAD Oct 2 19:15:12.546000 audit: BPF prog-id=49 op=UNLOAD Oct 2 19:15:12.549000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.549000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.549000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.549000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.549000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.549000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.549000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.549000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.549000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.549000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.549000 audit: BPF prog-id=68 op=LOAD Oct 2 19:15:12.549000 audit: BPF prog-id=50 op=UNLOAD Oct 2 19:15:12.549000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.549000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.549000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.549000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.549000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.549000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.549000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.550000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.550000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.550000 audit: BPF prog-id=69 op=LOAD Oct 2 19:15:12.550000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.550000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.550000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.550000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.550000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:15:12.550000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.550000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.550000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.550000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.550000 audit: BPF prog-id=70 op=LOAD Oct 2 19:15:12.550000 audit: BPF prog-id=51 op=UNLOAD Oct 2 19:15:12.550000 audit: BPF prog-id=52 op=UNLOAD Oct 2 19:15:12.553000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.553000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.553000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.553000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.553000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:15:12.553000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.553000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.553000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.553000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.553000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.553000 audit: BPF prog-id=71 op=LOAD Oct 2 19:15:12.553000 audit: BPF prog-id=53 op=UNLOAD Oct 2 19:15:12.554000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.554000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.554000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.554000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Oct 2 19:15:12.554000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.554000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.554000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.555000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.555000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.555000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:12.555000 audit: BPF prog-id=72 op=LOAD Oct 2 19:15:12.555000 audit: BPF prog-id=54 op=UNLOAD Oct 2 19:15:12.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:12.592390 systemd[1]: Started kubelet.service. Oct 2 19:15:12.725497 kubelet[2020]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 19:15:12.725497 kubelet[2020]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Oct 2 19:15:12.725497 kubelet[2020]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:15:12.731675 kubelet[2020]: I1002 19:15:12.731586 2020 server.go:200] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:15:12.734118 kubelet[2020]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 19:15:12.734118 kubelet[2020]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:15:12.734118 kubelet[2020]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:15:13.643989 kubelet[2020]: I1002 19:15:13.643941 2020 server.go:413] "Kubelet version" kubeletVersion="v1.25.10" Oct 2 19:15:13.644294 kubelet[2020]: I1002 19:15:13.644272 2020 server.go:415] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:15:13.644745 kubelet[2020]: I1002 19:15:13.644720 2020 server.go:825] "Client rotation is on, will bootstrap in background" Oct 2 19:15:13.654954 kubelet[2020]: I1002 19:15:13.654881 2020 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:15:13.658072 kubelet[2020]: W1002 19:15:13.658039 2020 machine.go:65] Cannot read vendor id correctly, set empty. Oct 2 19:15:13.659353 kubelet[2020]: I1002 19:15:13.659322 2020 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 2 19:15:13.659967 kubelet[2020]: I1002 19:15:13.659943 2020 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:15:13.660209 kubelet[2020]: I1002 19:15:13.660187 2020 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none} Oct 2 19:15:13.660550 kubelet[2020]: I1002 19:15:13.660528 2020 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 19:15:13.660662 kubelet[2020]: I1002 19:15:13.660643 2020 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true Oct 2 19:15:13.660937 kubelet[2020]: I1002 19:15:13.660885 2020 state_mem.go:36] 
"Initialized new in-memory state store" Oct 2 19:15:13.668778 kubelet[2020]: I1002 19:15:13.668742 2020 kubelet.go:381] "Attempting to sync node with API server" Oct 2 19:15:13.669059 kubelet[2020]: I1002 19:15:13.669034 2020 kubelet.go:270] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:15:13.669268 kubelet[2020]: I1002 19:15:13.669224 2020 kubelet.go:281] "Adding apiserver pod source" Oct 2 19:15:13.669427 kubelet[2020]: I1002 19:15:13.669402 2020 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:15:13.670326 kubelet[2020]: E1002 19:15:13.670287 2020 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:13.670713 kubelet[2020]: E1002 19:15:13.670675 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:13.673039 kubelet[2020]: I1002 19:15:13.672991 2020 kuberuntime_manager.go:240] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:15:13.674055 kubelet[2020]: W1002 19:15:13.674017 2020 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 2 19:15:13.675213 kubelet[2020]: I1002 19:15:13.675168 2020 server.go:1175] "Started kubelet" Oct 2 19:15:13.675807 kubelet[2020]: I1002 19:15:13.675777 2020 server.go:155] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:15:13.677272 kubelet[2020]: I1002 19:15:13.677237 2020 server.go:438] "Adding debug handlers to kubelet server" Oct 2 19:15:13.680000 audit[2020]: AVC avc: denied { mac_admin } for pid=2020 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.680000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:15:13.680000 audit[2020]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000ab8900 a1=4000ae25a0 a2=4000ab88d0 a3=25 items=0 ppid=1 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:13.680000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:15:13.680000 audit[2020]: AVC avc: denied { mac_admin } for pid=2020 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.680000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:15:13.680000 audit[2020]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=400086bbe0 a1=4000ae25b8 a2=4000ab8990 a3=25 items=0 ppid=1 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 
19:15:13.680000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:15:13.681842 kubelet[2020]: I1002 19:15:13.681047 2020 kubelet.go:1274] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:15:13.681842 kubelet[2020]: I1002 19:15:13.681127 2020 kubelet.go:1278] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:15:13.681842 kubelet[2020]: I1002 19:15:13.681372 2020 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:15:13.684592 kubelet[2020]: E1002 19:15:13.684545 2020 cri_stats_provider.go:452] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:15:13.684827 kubelet[2020]: E1002 19:15:13.684794 2020 kubelet.go:1317] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:15:13.686770 kubelet[2020]: E1002 19:15:13.686601 2020 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.101.178a6051b34bf3d5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.101", UID:"172.31.21.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.101"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 675129813, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 675129813, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:13.689937 kubelet[2020]: I1002 19:15:13.689830 2020 volume_manager.go:293] "Starting Kubelet Volume Manager" Oct 2 19:15:13.691072 kubelet[2020]: I1002 19:15:13.690988 2020 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 2 19:15:13.693858 kubelet[2020]: E1002 19:15:13.693817 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:15:13.706886 kubelet[2020]: W1002 19:15:13.706846 2020 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:15:13.710039 kubelet[2020]: E1002 19:15:13.710003 2020 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:15:13.710327 kubelet[2020]: W1002 19:15:13.707235 2020 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.31.21.101" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:15:13.710471 kubelet[2020]: E1002 19:15:13.710447 2020 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.21.101" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:15:13.710574 kubelet[2020]: W1002 19:15:13.707282 2020 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope 
Oct 2 19:15:13.710711 kubelet[2020]: E1002 19:15:13.710690 2020 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:15:13.710810 kubelet[2020]: E1002 19:15:13.707325 2020 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "172.31.21.101" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:15:13.710938 kubelet[2020]: E1002 19:15:13.707413 2020 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.101.178a6051b3df09de", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.101", UID:"172.31.21.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.101"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 684769246, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 684769246, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User 
"system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:15:13.760785 kubelet[2020]: I1002 19:15:13.760732 2020 cpu_manager.go:213] "Starting CPU manager" policy="none" Oct 2 19:15:13.760785 kubelet[2020]: I1002 19:15:13.760774 2020 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s" Oct 2 19:15:13.761472 kubelet[2020]: I1002 19:15:13.760809 2020 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:15:13.761821 kubelet[2020]: E1002 19:15:13.761427 2020 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.101.178a6051b846ee38", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.101", UID:"172.31.21.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.21.101 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.101"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 758686776, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 758686776, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:13.764371 kubelet[2020]: E1002 19:15:13.763480 2020 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.101.178a6051b84718c8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.101", UID:"172.31.21.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.21.101 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.101"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 758697672, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 758697672, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:13.764000 audit[2039]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=2039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:13.764000 audit[2039]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe6a127a0 a2=0 a3=1 items=0 ppid=2020 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:13.764000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:15:13.765981 kubelet[2020]: E1002 19:15:13.765633 2020 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.101.178a6051b8472fc7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.101", UID:"172.31.21.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.21.101 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.101"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 758703559, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 758703559, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events 
is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:15:13.766298 kubelet[2020]: I1002 19:15:13.766260 2020 policy_none.go:49] "None policy: Start" Oct 2 19:15:13.767502 kubelet[2020]: I1002 19:15:13.767448 2020 memory_manager.go:168] "Starting memorymanager" policy="None" Oct 2 19:15:13.767502 kubelet[2020]: I1002 19:15:13.767498 2020 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:15:13.769000 audit[2041]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:13.769000 audit[2041]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffec2a2ee0 a2=0 a3=1 items=0 ppid=2020 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:13.769000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:15:13.776465 systemd[1]: Created slice kubepods.slice. Oct 2 19:15:13.796175 kubelet[2020]: E1002 19:15:13.796137 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:13.797775 kubelet[2020]: I1002 19:15:13.797744 2020 kubelet_node_status.go:70] "Attempting to register node" node="172.31.21.101" Oct 2 19:15:13.800863 systemd[1]: Created slice kubepods-burstable.slice. 
Oct 2 19:15:13.802671 kubelet[2020]: E1002 19:15:13.802614 2020 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.21.101" Oct 2 19:15:13.803774 kubelet[2020]: E1002 19:15:13.803573 2020 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.101.178a6051b846ee38", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.101", UID:"172.31.21.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.21.101 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.101"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 758686776, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 797687234, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.101.178a6051b846ee38" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:13.806735 kubelet[2020]: E1002 19:15:13.806278 2020 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.101.178a6051b84718c8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.101", UID:"172.31.21.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.21.101 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.101"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 758697672, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 797699740, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.101.178a6051b84718c8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:13.809145 kubelet[2020]: E1002 19:15:13.808859 2020 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.101.178a6051b8472fc7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.101", UID:"172.31.21.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.21.101 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.101"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 758703559, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 797705903, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.101.178a6051b8472fc7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:15:13.811251 systemd[1]: Created slice kubepods-besteffort.slice. 
Oct 2 19:15:13.783000 audit[2043]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=2043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:13.783000 audit[2043]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffca861ef0 a2=0 a3=1 items=0 ppid=2020 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:13.783000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:15:13.822991 kubelet[2020]: I1002 19:15:13.822957 2020 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:15:13.822000 audit[2020]: AVC avc: denied { mac_admin } for pid=2020 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:13.822000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:15:13.822000 audit[2020]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000ba1140 a1=40008fa288 a2=4000ba1110 a3=25 items=0 ppid=1 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:13.822000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:15:13.824793 kubelet[2020]: I1002 19:15:13.824738 2020 server.go:86] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:15:13.826614 kubelet[2020]: E1002 19:15:13.826368 2020 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.101.178a6051bc31da43", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.101", UID:"172.31.21.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.101"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 824414275, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 824414275, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:13.828504 kubelet[2020]: I1002 19:15:13.828472 2020 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:15:13.829774 kubelet[2020]: E1002 19:15:13.829741 2020 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.21.101\" not found" Oct 2 19:15:13.830000 audit[2048]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=2048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:13.830000 audit[2048]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffdd15b470 a2=0 a3=1 items=0 ppid=2020 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:13.830000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:15:13.892000 audit[2054]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:13.892000 audit[2054]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffc368e610 a2=0 a3=1 items=0 ppid=2020 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:13.892000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:15:13.898389 kubelet[2020]: E1002 19:15:13.898356 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:13.899000 audit[2055]: NETFILTER_CFG 
table=nat:7 family=2 entries=2 op=nft_register_chain pid=2055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:13.899000 audit[2055]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc0d841f0 a2=0 a3=1 items=0 ppid=2020 pid=2055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:13.899000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:15:13.912455 kubelet[2020]: E1002 19:15:13.912384 2020 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "172.31.21.101" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:15:13.914000 audit[2058]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=2058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:13.914000 audit[2058]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffebcbb180 a2=0 a3=1 items=0 ppid=2020 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:13.914000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:15:13.927000 audit[2061]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2061 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:13.927000 audit[2061]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffff8abcd0 a2=0 a3=1 items=0 ppid=2020 pid=2061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:13.927000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:15:13.931000 audit[2062]: NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_chain pid=2062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:13.931000 audit[2062]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff4d60850 a2=0 a3=1 items=0 ppid=2020 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:13.931000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:15:13.935000 audit[2063]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=2063 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:13.935000 audit[2063]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc76be930 a2=0 a3=1 items=0 ppid=2020 pid=2063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:13.935000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:15:13.944000 audit[2065]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=2065 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:13.944000 audit[2065]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 
a0=3 a1=fffffc11e060 a2=0 a3=1 items=0 ppid=2020 pid=2065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:13.944000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:15:13.952000 audit[2067]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2067 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:13.952000 audit[2067]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffda8d8c50 a2=0 a3=1 items=0 ppid=2020 pid=2067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:13.952000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:15:13.992000 audit[2070]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2070 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:13.992000 audit[2070]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=fffffd98ed60 a2=0 a3=1 items=0 ppid=2020 pid=2070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:13.992000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 
19:15:13.998704 kubelet[2020]: E1002 19:15:13.998659 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:14.000000 audit[2072]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=2072 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:14.000000 audit[2072]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffee58f2f0 a2=0 a3=1 items=0 ppid=2020 pid=2072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:14.000000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:15:14.004023 kubelet[2020]: I1002 19:15:14.003977 2020 kubelet_node_status.go:70] "Attempting to register node" node="172.31.21.101" Oct 2 19:15:14.006559 kubelet[2020]: E1002 19:15:14.006511 2020 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.21.101" Oct 2 19:15:14.006996 kubelet[2020]: E1002 19:15:14.006824 2020 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.101.178a6051b846ee38", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.101", UID:"172.31.21.101", 
APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.21.101 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.101"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 758686776, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 14, 3879391, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.101.178a6051b846ee38" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:15:14.008621 kubelet[2020]: E1002 19:15:14.008460 2020 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.101.178a6051b84718c8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.101", UID:"172.31.21.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.21.101 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.101"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 758697672, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 14, 3934468, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), 
Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.101.178a6051b84718c8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:15:14.022000 audit[2075]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=2075 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:14.022000 audit[2075]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=540 a0=3 a1=ffffefa136f0 a2=0 a3=1 items=0 ppid=2020 pid=2075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:14.022000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:15:14.025044 kubelet[2020]: I1002 19:15:14.025005 2020 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Oct 2 19:15:14.026000 audit[2076]: NETFILTER_CFG table=mangle:17 family=2 entries=1 op=nft_register_chain pid=2076 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:14.026000 audit[2076]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd35600b0 a2=0 a3=1 items=0 ppid=2020 pid=2076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:14.026000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:15:14.026000 audit[2077]: NETFILTER_CFG table=mangle:18 family=10 entries=2 op=nft_register_chain pid=2077 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:14.026000 audit[2077]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffffd3aa920 a2=0 a3=1 items=0 ppid=2020 pid=2077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:14.026000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:15:14.030000 audit[2078]: NETFILTER_CFG table=nat:19 family=10 entries=2 op=nft_register_chain pid=2078 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:14.035112 kernel: kauditd_printk_skb: 492 callbacks suppressed Oct 2 19:15:14.035263 kernel: audit: type=1325 audit(1696274114.030:619): table=nat:19 family=10 entries=2 op=nft_register_chain pid=2078 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:14.030000 audit[2078]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffff7cf6ea0 a2=0 a3=1 items=0 ppid=2020 pid=2078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:14.052501 kernel: audit: type=1300 audit(1696274114.030:619): arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffff7cf6ea0 a2=0 a3=1 items=0 ppid=2020 pid=2078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:14.052945 kernel: audit: type=1327 audit(1696274114.030:619): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:15:14.030000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:15:14.032000 audit[2079]: NETFILTER_CFG table=nat:20 family=2 entries=1 op=nft_register_chain pid=2079 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:14.064257 kernel: audit: type=1325 audit(1696274114.032:620): table=nat:20 family=2 entries=1 op=nft_register_chain pid=2079 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:14.032000 audit[2079]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff18e6a80 a2=0 a3=1 items=0 ppid=2020 pid=2079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:14.075974 kernel: audit: type=1300 audit(1696274114.032:620): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff18e6a80 a2=0 a3=1 items=0 ppid=2020 pid=2079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:14.082456 kernel: audit: type=1327 audit(1696274114.032:620): 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:15:14.032000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:15:14.083177 kubelet[2020]: E1002 19:15:14.083055 2020 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.101.178a6051b8472fc7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.101", UID:"172.31.21.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.21.101 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.101"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 758703559, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 14, 3939934, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.101.178a6051b8472fc7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:14.037000 audit[2080]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_chain pid=2080 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:14.089593 kernel: audit: type=1325 audit(1696274114.037:621): table=filter:21 family=2 entries=1 op=nft_register_chain pid=2080 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:14.037000 audit[2080]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffcd81530 a2=0 a3=1 items=0 ppid=2020 pid=2080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:14.101811 kernel: audit: type=1300 audit(1696274114.037:621): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffcd81530 a2=0 a3=1 items=0 ppid=2020 pid=2080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:14.102112 kubelet[2020]: E1002 19:15:14.102083 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:14.037000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:15:14.108240 kernel: audit: type=1327 audit(1696274114.037:621): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:15:14.054000 audit[2082]: NETFILTER_CFG table=nat:22 family=10 entries=1 op=nft_register_rule pid=2082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:14.114034 kernel: audit: type=1325 audit(1696274114.054:622): table=nat:22 family=10 entries=1 op=nft_register_rule pid=2082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:14.054000 audit[2082]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=216 a0=3 a1=ffffca558b00 a2=0 a3=1 items=0 ppid=2020 pid=2082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:14.054000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:15:14.060000 audit[2083]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=2083 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:14.060000 audit[2083]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=fffffc6317d0 a2=0 a3=1 items=0 ppid=2020 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:14.060000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:15:14.074000 audit[2085]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=2085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:14.074000 audit[2085]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffe55fb2f0 a2=0 a3=1 items=0 ppid=2020 pid=2085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:14.074000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:15:14.081000 audit[2086]: 
NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=2086 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:14.081000 audit[2086]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc3f982d0 a2=0 a3=1 items=0 ppid=2020 pid=2086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:14.081000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:15:14.081000 audit[2087]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=2087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:14.081000 audit[2087]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdc9275f0 a2=0 a3=1 items=0 ppid=2020 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:14.081000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:15:14.104000 audit[2089]: NETFILTER_CFG table=nat:27 family=10 entries=1 op=nft_register_rule pid=2089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:14.104000 audit[2089]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffcc2ec670 a2=0 a3=1 items=0 ppid=2020 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:14.104000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 
19:15:14.116000 audit[2091]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=2091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:14.116000 audit[2091]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffc53ee4d0 a2=0 a3=1 items=0 ppid=2020 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:14.116000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:15:14.125000 audit[2093]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=2093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:14.125000 audit[2093]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=fffffa6de8c0 a2=0 a3=1 items=0 ppid=2020 pid=2093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:14.125000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:15:14.133000 audit[2095]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=2095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:14.133000 audit[2095]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=fffff149d320 a2=0 a3=1 items=0 ppid=2020 pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:14.133000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:15:14.145000 audit[2097]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=2097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:14.145000 audit[2097]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=556 a0=3 a1=ffffc408d0e0 a2=0 a3=1 items=0 ppid=2020 pid=2097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:14.145000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:15:14.147607 kubelet[2020]: I1002 19:15:14.147556 2020 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Oct 2 19:15:14.147719 kubelet[2020]: I1002 19:15:14.147662 2020 status_manager.go:161] "Starting to sync pod status with apiserver" Oct 2 19:15:14.147719 kubelet[2020]: I1002 19:15:14.147700 2020 kubelet.go:2010] "Starting kubelet main sync loop" Oct 2 19:15:14.147826 kubelet[2020]: E1002 19:15:14.147785 2020 kubelet.go:2034] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 19:15:14.154217 kubelet[2020]: W1002 19:15:14.149652 2020 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:15:14.154217 kubelet[2020]: E1002 19:15:14.149704 2020 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:15:14.153000 audit[2098]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=2098 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:14.153000 audit[2098]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc2f7c000 a2=0 a3=1 items=0 ppid=2020 pid=2098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:14.153000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:15:14.157000 audit[2099]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=2099 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:14.157000 audit[2099]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=100 a0=3 a1=fffff5eb6c60 a2=0 a3=1 items=0 ppid=2020 pid=2099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:14.157000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:15:14.160000 audit[2100]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=2100 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:14.160000 audit[2100]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc6040070 a2=0 a3=1 items=0 ppid=2020 pid=2100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:14.160000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:15:14.202583 kubelet[2020]: E1002 19:15:14.202517 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:14.303429 kubelet[2020]: E1002 19:15:14.303368 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:14.314341 kubelet[2020]: E1002 19:15:14.314277 2020 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "172.31.21.101" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:15:14.403687 kubelet[2020]: E1002 19:15:14.403646 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:14.409460 kubelet[2020]: I1002 19:15:14.408527 2020 kubelet_node_status.go:70] "Attempting to register node" 
node="172.31.21.101" Oct 2 19:15:14.411023 kubelet[2020]: E1002 19:15:14.410950 2020 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.21.101" Oct 2 19:15:14.411249 kubelet[2020]: E1002 19:15:14.411131 2020 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.101.178a6051b846ee38", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.101", UID:"172.31.21.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.21.101 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.101"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 758686776, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 14, 408482820, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.101.178a6051b846ee38" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:14.478298 kubelet[2020]: E1002 19:15:14.478176 2020 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.101.178a6051b84718c8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.101", UID:"172.31.21.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.21.101 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.101"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 758697672, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 14, 408490352, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.101.178a6051b84718c8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:14.504615 kubelet[2020]: E1002 19:15:14.504579 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:14.597800 kubelet[2020]: W1002 19:15:14.597741 2020 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:15:14.597800 kubelet[2020]: E1002 19:15:14.597796 2020 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:15:14.604855 kubelet[2020]: E1002 19:15:14.604825 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:14.672033 kubelet[2020]: E1002 19:15:14.671312 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:14.677995 kubelet[2020]: E1002 19:15:14.677820 2020 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.101.178a6051b8472fc7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.101", UID:"172.31.21.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.21.101 status is now: NodeHasSufficientPID", 
Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.101"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 758703559, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 14, 408495013, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.101.178a6051b8472fc7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:15:14.705279 kubelet[2020]: E1002 19:15:14.705236 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:14.806302 kubelet[2020]: E1002 19:15:14.806244 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:14.907006 kubelet[2020]: E1002 19:15:14.906943 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:15.007788 kubelet[2020]: E1002 19:15:15.007311 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:15.106221 kubelet[2020]: W1002 19:15:15.106176 2020 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.31.21.101" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:15:15.106221 kubelet[2020]: E1002 19:15:15.106227 2020 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.21.101" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:15:15.108277 kubelet[2020]: E1002 19:15:15.108245 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:15.115639 kubelet[2020]: E1002 19:15:15.115595 2020 controller.go:144] 
failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "172.31.21.101" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:15:15.209329 kubelet[2020]: E1002 19:15:15.209289 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:15.212643 kubelet[2020]: I1002 19:15:15.212617 2020 kubelet_node_status.go:70] "Attempting to register node" node="172.31.21.101" Oct 2 19:15:15.214338 kubelet[2020]: E1002 19:15:15.214262 2020 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.21.101" Oct 2 19:15:15.214603 kubelet[2020]: E1002 19:15:15.214245 2020 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.101.178a6051b846ee38", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.101", UID:"172.31.21.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.21.101 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.101"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 758686776, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 15, 212469836, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, 
time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.101.178a6051b846ee38" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:15:15.216792 kubelet[2020]: E1002 19:15:15.216624 2020 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.101.178a6051b84718c8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.101", UID:"172.31.21.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.21.101 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.101"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 758697672, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 15, 212495469, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.101.178a6051b84718c8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:15.279230 kubelet[2020]: E1002 19:15:15.278265 2020 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.101.178a6051b8472fc7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.101", UID:"172.31.21.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.21.101 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.101"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 758703559, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 15, 212560452, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.101.178a6051b8472fc7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:15.303473 kubelet[2020]: W1002 19:15:15.303403 2020 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:15:15.303473 kubelet[2020]: E1002 19:15:15.303479 2020 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:15:15.310650 kubelet[2020]: E1002 19:15:15.310605 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:15.411255 kubelet[2020]: E1002 19:15:15.411194 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:15.464413 kubelet[2020]: W1002 19:15:15.464365 2020 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:15:15.464569 kubelet[2020]: E1002 19:15:15.464444 2020 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:15:15.511928 kubelet[2020]: E1002 19:15:15.511871 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:15.612593 kubelet[2020]: E1002 19:15:15.612443 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:15.672038 kubelet[2020]: E1002 19:15:15.671977 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:15:15.713150 kubelet[2020]: E1002 19:15:15.713091 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:15.813795 kubelet[2020]: E1002 19:15:15.813739 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:15.914836 kubelet[2020]: E1002 19:15:15.914688 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:16.015434 kubelet[2020]: E1002 19:15:16.015370 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:16.116109 kubelet[2020]: E1002 19:15:16.116049 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:16.216648 kubelet[2020]: E1002 19:15:16.216505 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:16.317193 kubelet[2020]: E1002 19:15:16.317129 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:16.417842 kubelet[2020]: E1002 19:15:16.417787 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:16.518603 kubelet[2020]: E1002 19:15:16.518473 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:16.619113 kubelet[2020]: E1002 19:15:16.619050 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:16.672595 kubelet[2020]: E1002 19:15:16.672542 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:16.717992 kubelet[2020]: E1002 19:15:16.717928 2020 controller.go:144] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "172.31.21.101" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:15:16.719183 kubelet[2020]: E1002 19:15:16.719139 2020 kubelet.go:2448] "Error 
getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:16.815784 kubelet[2020]: I1002 19:15:16.815646 2020 kubelet_node_status.go:70] "Attempting to register node" node="172.31.21.101" Oct 2 19:15:16.817521 kubelet[2020]: E1002 19:15:16.817454 2020 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.21.101" Oct 2 19:15:16.818306 kubelet[2020]: E1002 19:15:16.818177 2020 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.101.178a6051b846ee38", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.101", UID:"172.31.21.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.21.101 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.101"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 758686776, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 16, 815567473, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.101.178a6051b846ee38" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:16.819312 kubelet[2020]: E1002 19:15:16.819251 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:16.819838 kubelet[2020]: E1002 19:15:16.819723 2020 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.101.178a6051b84718c8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.101", UID:"172.31.21.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.21.101 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.101"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 758697672, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 16, 815575941, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.101.178a6051b84718c8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:16.821434 kubelet[2020]: E1002 19:15:16.821307 2020 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.101.178a6051b8472fc7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.101", UID:"172.31.21.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.21.101 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.101"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 758703559, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 16, 815584553, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.101.178a6051b8472fc7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:16.920163 kubelet[2020]: E1002 19:15:16.920108 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:17.020719 kubelet[2020]: E1002 19:15:17.020676 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:17.121339 kubelet[2020]: E1002 19:15:17.121187 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:17.200176 kubelet[2020]: W1002 19:15:17.200127 2020 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:15:17.200176 kubelet[2020]: E1002 19:15:17.200183 2020 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:15:17.221363 kubelet[2020]: E1002 19:15:17.221300 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:17.321943 kubelet[2020]: E1002 19:15:17.321866 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:17.422637 kubelet[2020]: E1002 19:15:17.422456 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:17.523211 kubelet[2020]: E1002 19:15:17.523160 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:17.623924 kubelet[2020]: E1002 19:15:17.623849 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:17.673401 kubelet[2020]: E1002 19:15:17.673285 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:17.724509 kubelet[2020]: 
E1002 19:15:17.724450 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:17.769298 kubelet[2020]: W1002 19:15:17.769257 2020 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:15:17.769500 kubelet[2020]: E1002 19:15:17.769478 2020 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:15:17.824729 kubelet[2020]: E1002 19:15:17.824687 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:17.910790 kubelet[2020]: W1002 19:15:17.910727 2020 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:15:17.910790 kubelet[2020]: E1002 19:15:17.910784 2020 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:15:17.926366 kubelet[2020]: E1002 19:15:17.926242 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:18.027001 kubelet[2020]: E1002 19:15:18.026934 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:18.127763 kubelet[2020]: E1002 19:15:18.127699 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:18.228357 kubelet[2020]: E1002 19:15:18.228207 2020 kubelet.go:2448] 
"Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:18.235939 kubelet[2020]: W1002 19:15:18.235870 2020 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.31.21.101" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:15:18.236115 kubelet[2020]: E1002 19:15:18.235948 2020 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.21.101" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:15:18.328663 kubelet[2020]: E1002 19:15:18.328603 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:18.429395 kubelet[2020]: E1002 19:15:18.429338 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:18.530170 kubelet[2020]: E1002 19:15:18.530036 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:18.630848 kubelet[2020]: E1002 19:15:18.630786 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:18.674251 kubelet[2020]: E1002 19:15:18.674182 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:18.731735 kubelet[2020]: E1002 19:15:18.731691 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:18.830149 kubelet[2020]: E1002 19:15:18.829998 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:15:18.832132 kubelet[2020]: E1002 19:15:18.832083 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:18.932880 kubelet[2020]: E1002 19:15:18.932839 2020 
kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:19.033587 kubelet[2020]: E1002 19:15:19.033539 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:19.134391 kubelet[2020]: E1002 19:15:19.134261 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:19.235009 kubelet[2020]: E1002 19:15:19.234953 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:19.335786 kubelet[2020]: E1002 19:15:19.335741 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:19.436688 kubelet[2020]: E1002 19:15:19.436526 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:19.537237 kubelet[2020]: E1002 19:15:19.537171 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:19.637921 kubelet[2020]: E1002 19:15:19.637843 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:19.675330 kubelet[2020]: E1002 19:15:19.675277 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:19.738670 kubelet[2020]: E1002 19:15:19.738524 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:19.839530 kubelet[2020]: E1002 19:15:19.839468 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:19.920586 kubelet[2020]: E1002 19:15:19.920500 2020 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "172.31.21.101" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:15:19.939831 kubelet[2020]: E1002 19:15:19.939780 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 
19:15:20.019198 kubelet[2020]: I1002 19:15:20.019083 2020 kubelet_node_status.go:70] "Attempting to register node" node="172.31.21.101" Oct 2 19:15:20.022337 kubelet[2020]: E1002 19:15:20.022288 2020 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.21.101" Oct 2 19:15:20.022503 kubelet[2020]: E1002 19:15:20.022382 2020 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.101.178a6051b846ee38", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.101", UID:"172.31.21.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.21.101 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.101"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 758686776, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 19041503, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.101.178a6051b846ee38" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:20.024359 kubelet[2020]: E1002 19:15:20.024211 2020 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.101.178a6051b84718c8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.101", UID:"172.31.21.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.21.101 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.101"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 758697672, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 19048708, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.101.178a6051b84718c8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:20.026178 kubelet[2020]: E1002 19:15:20.026050 2020 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.101.178a6051b8472fc7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.101", UID:"172.31.21.101", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.21.101 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.101"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 15, 13, 758703559, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 15, 20, 19053571, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.101.178a6051b8472fc7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:15:20.040529 kubelet[2020]: E1002 19:15:20.040468 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:20.141188 kubelet[2020]: E1002 19:15:20.141142 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:20.241643 kubelet[2020]: E1002 19:15:20.241598 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:20.342377 kubelet[2020]: E1002 19:15:20.342254 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:20.442944 kubelet[2020]: E1002 19:15:20.442879 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:20.543634 kubelet[2020]: E1002 19:15:20.543596 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:20.644562 kubelet[2020]: E1002 19:15:20.644415 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:20.675849 kubelet[2020]: E1002 19:15:20.675813 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:20.745503 kubelet[2020]: E1002 19:15:20.745437 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:20.846455 kubelet[2020]: E1002 19:15:20.846397 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:20.947203 kubelet[2020]: E1002 19:15:20.947061 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:21.047833 kubelet[2020]: E1002 19:15:21.047764 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:21.148601 kubelet[2020]: E1002 19:15:21.148543 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:21.248973 kubelet[2020]: E1002 19:15:21.248805 2020 kubelet.go:2448] "Error getting node" err="node 
\"172.31.21.101\" not found" Oct 2 19:15:21.252795 kubelet[2020]: W1002 19:15:21.252748 2020 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:15:21.252795 kubelet[2020]: E1002 19:15:21.252803 2020 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:15:21.349519 kubelet[2020]: E1002 19:15:21.349474 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:21.450178 kubelet[2020]: E1002 19:15:21.450133 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:21.550886 kubelet[2020]: E1002 19:15:21.550767 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:21.651472 kubelet[2020]: E1002 19:15:21.651427 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:21.677002 kubelet[2020]: E1002 19:15:21.676962 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:21.752393 kubelet[2020]: E1002 19:15:21.752342 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:21.853502 kubelet[2020]: E1002 19:15:21.853383 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:21.954465 kubelet[2020]: E1002 19:15:21.954400 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:22.055226 kubelet[2020]: E1002 19:15:22.055145 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not 
found" Oct 2 19:15:22.156342 kubelet[2020]: E1002 19:15:22.156194 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:22.257340 kubelet[2020]: E1002 19:15:22.257274 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:22.358025 kubelet[2020]: E1002 19:15:22.357964 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:22.458838 kubelet[2020]: E1002 19:15:22.458690 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:22.558775 kubelet[2020]: W1002 19:15:22.558705 2020 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:15:22.558775 kubelet[2020]: E1002 19:15:22.558765 2020 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:15:22.559052 kubelet[2020]: E1002 19:15:22.558801 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:22.659603 kubelet[2020]: E1002 19:15:22.659538 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:22.677894 kubelet[2020]: E1002 19:15:22.677849 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:22.760319 kubelet[2020]: E1002 19:15:22.760174 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:22.861216 kubelet[2020]: E1002 19:15:22.861153 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:22.961924 
kubelet[2020]: E1002 19:15:22.961864 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:23.062757 kubelet[2020]: E1002 19:15:23.062597 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:23.089400 kubelet[2020]: W1002 19:15:23.089338 2020 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.31.21.101" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:15:23.089400 kubelet[2020]: E1002 19:15:23.089397 2020 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.21.101" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:15:23.163318 kubelet[2020]: E1002 19:15:23.163255 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:23.264419 kubelet[2020]: E1002 19:15:23.264352 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:23.365294 kubelet[2020]: E1002 19:15:23.365145 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:23.465959 kubelet[2020]: E1002 19:15:23.465881 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:23.566687 kubelet[2020]: E1002 19:15:23.566629 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:23.653433 kubelet[2020]: I1002 19:15:23.653136 2020 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 19:15:23.667832 kubelet[2020]: E1002 19:15:23.667775 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:23.678016 kubelet[2020]: E1002 19:15:23.677956 2020 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:23.768161 kubelet[2020]: E1002 19:15:23.768117 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:23.830933 kubelet[2020]: E1002 19:15:23.830856 2020 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.21.101\" not found" Oct 2 19:15:23.831599 kubelet[2020]: E1002 19:15:23.831562 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:15:23.869234 kubelet[2020]: E1002 19:15:23.869170 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:23.970996 kubelet[2020]: E1002 19:15:23.970813 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:24.071643 kubelet[2020]: E1002 19:15:24.071571 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:24.098864 kubelet[2020]: E1002 19:15:24.098810 2020 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.21.101" not found Oct 2 19:15:24.171783 kubelet[2020]: E1002 19:15:24.171752 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:24.272778 kubelet[2020]: E1002 19:15:24.272616 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:24.373507 kubelet[2020]: E1002 19:15:24.373458 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:24.474221 kubelet[2020]: E1002 19:15:24.474155 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:24.575375 kubelet[2020]: E1002 19:15:24.575225 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not 
found" Oct 2 19:15:24.676271 kubelet[2020]: E1002 19:15:24.676225 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:24.678465 kubelet[2020]: E1002 19:15:24.678428 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:24.776933 kubelet[2020]: E1002 19:15:24.776852 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:24.877956 kubelet[2020]: E1002 19:15:24.877805 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:24.978761 kubelet[2020]: E1002 19:15:24.978689 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:25.079498 kubelet[2020]: E1002 19:15:25.079454 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:25.180120 kubelet[2020]: E1002 19:15:25.180002 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:25.281039 kubelet[2020]: E1002 19:15:25.280995 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:25.312373 kubelet[2020]: E1002 19:15:25.312333 2020 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.21.101" not found Oct 2 19:15:25.381997 kubelet[2020]: E1002 19:15:25.381957 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:25.483200 kubelet[2020]: E1002 19:15:25.483071 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:25.583747 kubelet[2020]: E1002 19:15:25.583702 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:25.678744 kubelet[2020]: E1002 19:15:25.678701 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:15:25.685029 kubelet[2020]: E1002 19:15:25.684980 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:25.785801 kubelet[2020]: E1002 19:15:25.785647 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:25.886749 kubelet[2020]: E1002 19:15:25.886675 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:25.987364 kubelet[2020]: E1002 19:15:25.987322 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:26.088083 kubelet[2020]: E1002 19:15:26.087952 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:26.188382 kubelet[2020]: E1002 19:15:26.188343 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:26.289071 kubelet[2020]: E1002 19:15:26.289028 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:26.328269 kubelet[2020]: E1002 19:15:26.328224 2020 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.21.101\" not found" node="172.31.21.101" Oct 2 19:15:26.389488 kubelet[2020]: E1002 19:15:26.389347 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:26.423571 kubelet[2020]: I1002 19:15:26.423538 2020 kubelet_node_status.go:70] "Attempting to register node" node="172.31.21.101" Oct 2 19:15:26.490098 kubelet[2020]: E1002 19:15:26.490053 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:26.591107 kubelet[2020]: E1002 19:15:26.591021 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:26.679780 kubelet[2020]: E1002 19:15:26.679643 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:26.691893 
kubelet[2020]: E1002 19:15:26.691855 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:26.716114 kubelet[2020]: I1002 19:15:26.716051 2020 kubelet_node_status.go:73] "Successfully registered node" node="172.31.21.101" Oct 2 19:15:26.792363 kubelet[2020]: E1002 19:15:26.792279 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:26.893192 kubelet[2020]: E1002 19:15:26.893123 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:26.993618 kubelet[2020]: E1002 19:15:26.993460 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:27.094166 kubelet[2020]: E1002 19:15:27.094097 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:27.123474 sudo[1816]: pam_unix(sudo:session): session closed for user root Oct 2 19:15:27.126405 kernel: kauditd_printk_skb: 38 callbacks suppressed Oct 2 19:15:27.126464 kernel: audit: type=1106 audit(1696274127.122:635): pid=1816 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:15:27.122000 audit[1816]: USER_END pid=1816 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:15:27.122000 audit[1816]: CRED_DISP pid=1816 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:15:27.143445 kernel: audit: type=1104 audit(1696274127.122:636): pid=1816 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:15:27.156269 sshd[1813]: pam_unix(sshd:session): session closed for user core Oct 2 19:15:27.157000 audit[1813]: USER_END pid=1813 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:15:27.157000 audit[1813]: CRED_DISP pid=1813 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:15:27.182106 kernel: audit: type=1106 audit(1696274127.157:637): pid=1813 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:15:27.182219 kernel: audit: type=1104 audit(1696274127.157:638): pid=1813 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:15:27.182232 systemd[1]: sshd@6-172.31.21.101:22-139.178.89.65:48670.service: Deactivated successfully. Oct 2 19:15:27.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.21.101:22-139.178.89.65:48670 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Oct 2 19:15:27.183634 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 19:15:27.193970 kernel: audit: type=1131 audit(1696274127.181:639): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.21.101:22-139.178.89.65:48670 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:27.194236 kubelet[2020]: E1002 19:15:27.194203 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:27.194887 systemd-logind[1546]: Session 7 logged out. Waiting for processes to exit. Oct 2 19:15:27.197056 systemd-logind[1546]: Removed session 7. Oct 2 19:15:27.294785 kubelet[2020]: E1002 19:15:27.294644 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:27.395096 kubelet[2020]: E1002 19:15:27.395045 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:27.496048 kubelet[2020]: E1002 19:15:27.495993 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:27.597091 kubelet[2020]: E1002 19:15:27.596957 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:27.680653 kubelet[2020]: E1002 19:15:27.680579 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:27.697547 kubelet[2020]: E1002 19:15:27.697493 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:27.798430 kubelet[2020]: E1002 19:15:27.798385 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:27.899386 kubelet[2020]: E1002 19:15:27.899246 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:27.942485 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Oct 2 19:15:27.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:27.951951 kernel: audit: type=1131 audit(1696274127.942:640): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:15:27.977000 audit: BPF prog-id=59 op=UNLOAD Oct 2 19:15:27.977000 audit: BPF prog-id=58 op=UNLOAD Oct 2 19:15:27.983554 kernel: audit: type=1334 audit(1696274127.977:641): prog-id=59 op=UNLOAD Oct 2 19:15:27.983631 kernel: audit: type=1334 audit(1696274127.977:642): prog-id=58 op=UNLOAD Oct 2 19:15:27.977000 audit: BPF prog-id=57 op=UNLOAD Oct 2 19:15:27.986400 kernel: audit: type=1334 audit(1696274127.977:643): prog-id=57 op=UNLOAD Oct 2 19:15:28.000017 kubelet[2020]: E1002 19:15:27.999955 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:28.100557 kubelet[2020]: E1002 19:15:28.100498 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:28.200876 kubelet[2020]: E1002 19:15:28.200741 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:28.301484 kubelet[2020]: E1002 19:15:28.301421 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:28.402206 kubelet[2020]: E1002 19:15:28.402149 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:28.502994 kubelet[2020]: E1002 19:15:28.502842 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:28.603573 kubelet[2020]: E1002 19:15:28.603506 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:28.681106 kubelet[2020]: E1002 19:15:28.681035 
2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:28.704681 kubelet[2020]: E1002 19:15:28.704649 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:28.805837 kubelet[2020]: E1002 19:15:28.805367 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:28.833531 kubelet[2020]: E1002 19:15:28.833499 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:15:28.906446 kubelet[2020]: E1002 19:15:28.906398 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:29.008114 kubelet[2020]: E1002 19:15:29.008073 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:29.109255 kubelet[2020]: E1002 19:15:29.108727 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:29.210302 kubelet[2020]: E1002 19:15:29.210247 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:29.310984 kubelet[2020]: E1002 19:15:29.310901 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:29.412073 kubelet[2020]: E1002 19:15:29.411575 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:29.513297 kubelet[2020]: E1002 19:15:29.513251 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:29.613986 kubelet[2020]: E1002 19:15:29.613944 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:29.681850 kubelet[2020]: E1002 19:15:29.681371 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:29.714741 kubelet[2020]: 
E1002 19:15:29.714685 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:29.815790 kubelet[2020]: E1002 19:15:29.815729 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:29.916684 kubelet[2020]: E1002 19:15:29.916621 2020 kubelet.go:2448] "Error getting node" err="node \"172.31.21.101\" not found" Oct 2 19:15:30.017747 kubelet[2020]: I1002 19:15:30.017249 2020 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 19:15:30.018607 env[1566]: time="2023-10-02T19:15:30.018448879Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 2 19:15:30.019193 kubelet[2020]: I1002 19:15:30.018757 2020 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 19:15:30.019810 kubelet[2020]: E1002 19:15:30.019780 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:15:30.682511 kubelet[2020]: E1002 19:15:30.682468 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:30.682802 kubelet[2020]: I1002 19:15:30.682752 2020 apiserver.go:52] "Watching apiserver" Oct 2 19:15:30.687084 kubelet[2020]: I1002 19:15:30.687032 2020 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:15:30.687261 kubelet[2020]: I1002 19:15:30.687163 2020 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:15:30.699359 systemd[1]: Created slice kubepods-besteffort-pod40f7e1bb_6478_489a_ac8b_414652586014.slice. 
Oct 2 19:15:30.708165 kubelet[2020]: I1002 19:15:30.708112 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-cilium-cgroup\") pod \"cilium-swhzw\" (UID: \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\") " pod="kube-system/cilium-swhzw" Oct 2 19:15:30.708431 kubelet[2020]: I1002 19:15:30.708399 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-cni-path\") pod \"cilium-swhzw\" (UID: \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\") " pod="kube-system/cilium-swhzw" Oct 2 19:15:30.708718 kubelet[2020]: I1002 19:15:30.708682 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-clustermesh-secrets\") pod \"cilium-swhzw\" (UID: \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\") " pod="kube-system/cilium-swhzw" Oct 2 19:15:30.708898 kubelet[2020]: I1002 19:15:30.708877 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40f7e1bb-6478-489a-ac8b-414652586014-xtables-lock\") pod \"kube-proxy-ld8lj\" (UID: \"40f7e1bb-6478-489a-ac8b-414652586014\") " pod="kube-system/kube-proxy-ld8lj" Oct 2 19:15:30.709091 kubelet[2020]: I1002 19:15:30.709067 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-hubble-tls\") pod \"cilium-swhzw\" (UID: \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\") " pod="kube-system/cilium-swhzw" Oct 2 19:15:30.709265 kubelet[2020]: I1002 19:15:30.709243 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-6wmlb\" (UniqueName: \"kubernetes.io/projected/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-kube-api-access-6wmlb\") pod \"cilium-swhzw\" (UID: \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\") " pod="kube-system/cilium-swhzw" Oct 2 19:15:30.709410 kubelet[2020]: I1002 19:15:30.709389 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/40f7e1bb-6478-489a-ac8b-414652586014-kube-proxy\") pod \"kube-proxy-ld8lj\" (UID: \"40f7e1bb-6478-489a-ac8b-414652586014\") " pod="kube-system/kube-proxy-ld8lj" Oct 2 19:15:30.709578 kubelet[2020]: I1002 19:15:30.709556 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40f7e1bb-6478-489a-ac8b-414652586014-lib-modules\") pod \"kube-proxy-ld8lj\" (UID: \"40f7e1bb-6478-489a-ac8b-414652586014\") " pod="kube-system/kube-proxy-ld8lj" Oct 2 19:15:30.709734 kubelet[2020]: I1002 19:15:30.709713 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-bpf-maps\") pod \"cilium-swhzw\" (UID: \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\") " pod="kube-system/cilium-swhzw" Oct 2 19:15:30.709876 kubelet[2020]: I1002 19:15:30.709855 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-etc-cni-netd\") pod \"cilium-swhzw\" (UID: \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\") " pod="kube-system/cilium-swhzw" Oct 2 19:15:30.710066 kubelet[2020]: I1002 19:15:30.710041 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-lib-modules\") pod \"cilium-swhzw\" (UID: 
\"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\") " pod="kube-system/cilium-swhzw" Oct 2 19:15:30.710256 kubelet[2020]: I1002 19:15:30.710229 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86xqp\" (UniqueName: \"kubernetes.io/projected/40f7e1bb-6478-489a-ac8b-414652586014-kube-api-access-86xqp\") pod \"kube-proxy-ld8lj\" (UID: \"40f7e1bb-6478-489a-ac8b-414652586014\") " pod="kube-system/kube-proxy-ld8lj" Oct 2 19:15:30.710471 kubelet[2020]: I1002 19:15:30.710443 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-cilium-run\") pod \"cilium-swhzw\" (UID: \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\") " pod="kube-system/cilium-swhzw" Oct 2 19:15:30.710662 kubelet[2020]: I1002 19:15:30.710632 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-hostproc\") pod \"cilium-swhzw\" (UID: \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\") " pod="kube-system/cilium-swhzw" Oct 2 19:15:30.711189 kubelet[2020]: I1002 19:15:30.711150 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-xtables-lock\") pod \"cilium-swhzw\" (UID: \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\") " pod="kube-system/cilium-swhzw" Oct 2 19:15:30.711448 kubelet[2020]: I1002 19:15:30.711392 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-cilium-config-path\") pod \"cilium-swhzw\" (UID: \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\") " pod="kube-system/cilium-swhzw" Oct 2 19:15:30.711621 kubelet[2020]: I1002 19:15:30.711599 2020 reconciler.go:357] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-host-proc-sys-net\") pod \"cilium-swhzw\" (UID: \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\") " pod="kube-system/cilium-swhzw" Oct 2 19:15:30.714803 kubelet[2020]: I1002 19:15:30.714673 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-host-proc-sys-kernel\") pod \"cilium-swhzw\" (UID: \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\") " pod="kube-system/cilium-swhzw" Oct 2 19:15:30.714803 kubelet[2020]: I1002 19:15:30.714744 2020 reconciler.go:169] "Reconciler: start to sync state" Oct 2 19:15:30.719530 systemd[1]: Created slice kubepods-burstable-pod1af4ab9f_e101_4cff_a4cc_854aa6d5192f.slice. Oct 2 19:15:31.315203 env[1566]: time="2023-10-02T19:15:31.315140165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ld8lj,Uid:40f7e1bb-6478-489a-ac8b-414652586014,Namespace:kube-system,Attempt:0,}" Oct 2 19:15:31.335679 env[1566]: time="2023-10-02T19:15:31.335584972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-swhzw,Uid:1af4ab9f-e101-4cff-a4cc-854aa6d5192f,Namespace:kube-system,Attempt:0,}" Oct 2 19:15:31.682929 kubelet[2020]: E1002 19:15:31.682832 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:31.866605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1561569109.mount: Deactivated successfully. 
Oct 2 19:15:31.879001 env[1566]: time="2023-10-02T19:15:31.878927624Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:31.880783 env[1566]: time="2023-10-02T19:15:31.880713974Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:31.885941 env[1566]: time="2023-10-02T19:15:31.885854118Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:31.888185 env[1566]: time="2023-10-02T19:15:31.888125869Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:31.892456 env[1566]: time="2023-10-02T19:15:31.892392206Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:31.893897 env[1566]: time="2023-10-02T19:15:31.893852920Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:31.898154 env[1566]: time="2023-10-02T19:15:31.898074842Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:31.901644 env[1566]: time="2023-10-02T19:15:31.901591423Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:31.960887 env[1566]: time="2023-10-02T19:15:31.958032663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:15:31.960887 env[1566]: time="2023-10-02T19:15:31.958113895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:15:31.960887 env[1566]: time="2023-10-02T19:15:31.958139752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:15:31.960887 env[1566]: time="2023-10-02T19:15:31.958713875Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/38d2ab7fcc07e13e6a570258c2073c2e245cbbeb734e34ae543c95b71a1d3d69 pid=2119 runtime=io.containerd.runc.v2 Oct 2 19:15:31.996655 env[1566]: time="2023-10-02T19:15:31.996391705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:15:32.001773 systemd[1]: Started cri-containerd-38d2ab7fcc07e13e6a570258c2073c2e245cbbeb734e34ae543c95b71a1d3d69.scope. Oct 2 19:15:32.009965 env[1566]: time="2023-10-02T19:15:32.009821031Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:15:32.010344 env[1566]: time="2023-10-02T19:15:32.010232452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:15:32.013039 env[1566]: time="2023-10-02T19:15:32.012868520Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e4ac73e9e26cee9ab50a439cc6e66fd2db28c3015bf930e4f29c46b66778110d pid=2133 runtime=io.containerd.runc.v2 Oct 2 19:15:32.053122 systemd[1]: Started cri-containerd-e4ac73e9e26cee9ab50a439cc6e66fd2db28c3015bf930e4f29c46b66778110d.scope. Oct 2 19:15:32.059000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.067000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.069138 kernel: audit: type=1400 audit(1696274132.059:644): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.067000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.067000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.067000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.067000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.067000 audit[1]: AVC 
avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.067000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.067000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.067000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.067000 audit: BPF prog-id=73 op=LOAD Oct 2 19:15:32.068000 audit[2132]: AVC avc: denied { bpf } for pid=2132 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.068000 audit[2132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=40001c5b38 a2=10 a3=0 items=0 ppid=2119 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:32.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338643261623766636330376531336536613537303235386332303733 Oct 2 19:15:32.068000 audit[2132]: AVC avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.068000 audit[2132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 
a0=0 a1=40001c55a0 a2=3c a3=0 items=0 ppid=2119 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:32.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338643261623766636330376531336536613537303235386332303733 Oct 2 19:15:32.068000 audit[2132]: AVC avc: denied { bpf } for pid=2132 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.068000 audit[2132]: AVC avc: denied { bpf } for pid=2132 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.068000 audit[2132]: AVC avc: denied { bpf } for pid=2132 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.068000 audit[2132]: AVC avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.068000 audit[2132]: AVC avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.068000 audit[2132]: AVC avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.068000 audit[2132]: AVC avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:15:32.068000 audit[2132]: AVC avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.068000 audit[2132]: AVC avc: denied { bpf } for pid=2132 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.068000 audit[2132]: AVC avc: denied { bpf } for pid=2132 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.068000 audit: BPF prog-id=74 op=LOAD Oct 2 19:15:32.068000 audit[2132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001c58e0 a2=78 a3=0 items=0 ppid=2119 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:32.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338643261623766636330376531336536613537303235386332303733 Oct 2 19:15:32.068000 audit[2132]: AVC avc: denied { bpf } for pid=2132 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.068000 audit[2132]: AVC avc: denied { bpf } for pid=2132 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.068000 audit[2132]: AVC avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:15:32.068000 audit[2132]: AVC avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.068000 audit[2132]: AVC avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.068000 audit[2132]: AVC avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.068000 audit[2132]: AVC avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.068000 audit[2132]: AVC avc: denied { bpf } for pid=2132 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.068000 audit[2132]: AVC avc: denied { bpf } for pid=2132 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.068000 audit: BPF prog-id=75 op=LOAD Oct 2 19:15:32.068000 audit[2132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=40001c5670 a2=78 a3=0 items=0 ppid=2119 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:32.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338643261623766636330376531336536613537303235386332303733 Oct 2 19:15:32.068000 audit: BPF 
prog-id=75 op=UNLOAD Oct 2 19:15:32.068000 audit: BPF prog-id=74 op=UNLOAD Oct 2 19:15:32.068000 audit[2132]: AVC avc: denied { bpf } for pid=2132 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.068000 audit[2132]: AVC avc: denied { bpf } for pid=2132 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.068000 audit[2132]: AVC avc: denied { bpf } for pid=2132 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.068000 audit[2132]: AVC avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.068000 audit[2132]: AVC avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.068000 audit[2132]: AVC avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.068000 audit[2132]: AVC avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.068000 audit[2132]: AVC avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.068000 audit[2132]: AVC avc: denied { bpf } for pid=2132 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.068000 
audit[2132]: AVC avc: denied { bpf } for pid=2132 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.068000 audit: BPF prog-id=76 op=LOAD Oct 2 19:15:32.068000 audit[2132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001c5b40 a2=78 a3=0 items=0 ppid=2119 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:32.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338643261623766636330376531336536613537303235386332303733 Oct 2 19:15:32.111000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.111000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.111000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.111000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.111000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.111000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.111000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.111000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.111000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.111000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.111000 audit: BPF prog-id=77 op=LOAD Oct 2 19:15:32.112000 audit[2151]: AVC avc: denied { bpf } for pid=2151 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.112000 audit[2151]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000145b38 a2=10 a3=0 items=0 ppid=2133 pid=2151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:32.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534616337336539653236636565396162353061343339636336653636 Oct 2 19:15:32.112000 audit[2151]: AVC avc: denied { perfmon } for pid=2151 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.112000 audit[2151]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001455a0 a2=3c a3=0 items=0 ppid=2133 pid=2151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:32.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534616337336539653236636565396162353061343339636336653636 Oct 2 19:15:32.112000 audit[2151]: AVC avc: denied { bpf } for pid=2151 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.112000 audit[2151]: AVC avc: denied { bpf } for pid=2151 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.112000 audit[2151]: AVC avc: denied { bpf } for pid=2151 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.112000 audit[2151]: AVC avc: denied { perfmon } for pid=2151 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.112000 audit[2151]: AVC avc: denied { perfmon } for pid=2151 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.112000 audit[2151]: AVC avc: denied { perfmon } for pid=2151 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:15:32.112000 audit[2151]: AVC avc: denied { perfmon } for pid=2151 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.112000 audit[2151]: AVC avc: denied { perfmon } for pid=2151 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.112000 audit[2151]: AVC avc: denied { bpf } for pid=2151 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.112000 audit[2151]: AVC avc: denied { bpf } for pid=2151 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.112000 audit: BPF prog-id=78 op=LOAD Oct 2 19:15:32.112000 audit[2151]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001458e0 a2=78 a3=0 items=0 ppid=2133 pid=2151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:32.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534616337336539653236636565396162353061343339636336653636 Oct 2 19:15:32.112000 audit[2151]: AVC avc: denied { bpf } for pid=2151 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.112000 audit[2151]: AVC avc: denied { bpf } for pid=2151 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:15:32.112000 audit[2151]: AVC avc: denied { perfmon } for pid=2151 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.112000 audit[2151]: AVC avc: denied { perfmon } for pid=2151 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.112000 audit[2151]: AVC avc: denied { perfmon } for pid=2151 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.112000 audit[2151]: AVC avc: denied { perfmon } for pid=2151 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.112000 audit[2151]: AVC avc: denied { perfmon } for pid=2151 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.112000 audit[2151]: AVC avc: denied { bpf } for pid=2151 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.112000 audit[2151]: AVC avc: denied { bpf } for pid=2151 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.112000 audit: BPF prog-id=79 op=LOAD Oct 2 19:15:32.112000 audit[2151]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000145670 a2=78 a3=0 items=0 ppid=2133 pid=2151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:32.112000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534616337336539653236636565396162353061343339636336653636 Oct 2 19:15:32.112000 audit: BPF prog-id=79 op=UNLOAD Oct 2 19:15:32.113000 audit: BPF prog-id=78 op=UNLOAD Oct 2 19:15:32.113000 audit[2151]: AVC avc: denied { bpf } for pid=2151 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.113000 audit[2151]: AVC avc: denied { bpf } for pid=2151 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.113000 audit[2151]: AVC avc: denied { bpf } for pid=2151 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.113000 audit[2151]: AVC avc: denied { perfmon } for pid=2151 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.113000 audit[2151]: AVC avc: denied { perfmon } for pid=2151 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.113000 audit[2151]: AVC avc: denied { perfmon } for pid=2151 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.113000 audit[2151]: AVC avc: denied { perfmon } for pid=2151 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.113000 audit[2151]: AVC avc: denied { perfmon } for pid=2151 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.113000 audit[2151]: AVC avc: denied { bpf } for pid=2151 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.113000 audit[2151]: AVC avc: denied { bpf } for pid=2151 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:32.113000 audit: BPF prog-id=80 op=LOAD Oct 2 19:15:32.113000 audit[2151]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000145b40 a2=78 a3=0 items=0 ppid=2133 pid=2151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:32.113000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534616337336539653236636565396162353061343339636336653636 Oct 2 19:15:32.120982 env[1566]: time="2023-10-02T19:15:32.120879860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ld8lj,Uid:40f7e1bb-6478-489a-ac8b-414652586014,Namespace:kube-system,Attempt:0,} returns sandbox id \"38d2ab7fcc07e13e6a570258c2073c2e245cbbeb734e34ae543c95b71a1d3d69\"" Oct 2 19:15:32.126773 env[1566]: time="2023-10-02T19:15:32.126701209Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\"" Oct 2 19:15:32.159304 env[1566]: time="2023-10-02T19:15:32.159247555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-swhzw,Uid:1af4ab9f-e101-4cff-a4cc-854aa6d5192f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4ac73e9e26cee9ab50a439cc6e66fd2db28c3015bf930e4f29c46b66778110d\"" Oct 2 19:15:32.683578 kubelet[2020]: E1002 
19:15:32.683506 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:33.449004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount82445245.mount: Deactivated successfully. Oct 2 19:15:33.670238 kubelet[2020]: E1002 19:15:33.670147 2020 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:33.684495 kubelet[2020]: E1002 19:15:33.684427 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:33.836714 kubelet[2020]: E1002 19:15:33.836580 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:15:34.090564 env[1566]: time="2023-10-02T19:15:34.090396816Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:34.094266 env[1566]: time="2023-10-02T19:15:34.094193403Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:36ad84e6a838b02d80a9db87b13c83185253f647e2af2f58f91ac1346103ff4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:34.098235 env[1566]: time="2023-10-02T19:15:34.098179488Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:34.100310 env[1566]: time="2023-10-02T19:15:34.100242613Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:34.101206 env[1566]: 
time="2023-10-02T19:15:34.101147899Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\" returns image reference \"sha256:36ad84e6a838b02d80a9db87b13c83185253f647e2af2f58f91ac1346103ff4e\"" Oct 2 19:15:34.102862 env[1566]: time="2023-10-02T19:15:34.102791684Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\"" Oct 2 19:15:34.108785 env[1566]: time="2023-10-02T19:15:34.108710969Z" level=info msg="CreateContainer within sandbox \"38d2ab7fcc07e13e6a570258c2073c2e245cbbeb734e34ae543c95b71a1d3d69\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:15:34.130541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3727291538.mount: Deactivated successfully. Oct 2 19:15:34.139963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1020725715.mount: Deactivated successfully. Oct 2 19:15:34.148941 env[1566]: time="2023-10-02T19:15:34.148703920Z" level=info msg="CreateContainer within sandbox \"38d2ab7fcc07e13e6a570258c2073c2e245cbbeb734e34ae543c95b71a1d3d69\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dc997568065c5c61ce122894d2b885b2fcc562b109d3654685767f5e8f00c322\"" Oct 2 19:15:34.151664 env[1566]: time="2023-10-02T19:15:34.151572199Z" level=info msg="StartContainer for \"dc997568065c5c61ce122894d2b885b2fcc562b109d3654685767f5e8f00c322\"" Oct 2 19:15:34.204215 systemd[1]: Started cri-containerd-dc997568065c5c61ce122894d2b885b2fcc562b109d3654685767f5e8f00c322.scope. 
Oct 2 19:15:34.263415 kernel: kauditd_printk_skb: 113 callbacks suppressed Oct 2 19:15:34.263534 kernel: audit: type=1400 audit(1696274134.251:680): avc: denied { perfmon } for pid=2196 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.251000 audit[2196]: AVC avc: denied { perfmon } for pid=2196 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.251000 audit[2196]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001bd5a0 a2=3c a3=0 items=0 ppid=2119 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463393937353638303635633563363163653132323839346432623838 Oct 2 19:15:34.289198 kernel: audit: type=1300 audit(1696274134.251:680): arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001bd5a0 a2=3c a3=0 items=0 ppid=2119 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.289346 kernel: audit: type=1327 audit(1696274134.251:680): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463393937353638303635633563363163653132323839346432623838 Oct 2 19:15:34.251000 audit[2196]: AVC avc: denied { bpf } for pid=2196 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.301404 kernel: audit: type=1400 audit(1696274134.251:681): avc: denied { bpf } for pid=2196 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.251000 audit[2196]: AVC avc: denied { bpf } for pid=2196 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.309959 kernel: audit: type=1400 audit(1696274134.251:681): avc: denied { bpf } for pid=2196 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.251000 audit[2196]: AVC avc: denied { bpf } for pid=2196 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.317823 kernel: audit: type=1400 audit(1696274134.251:681): avc: denied { bpf } for pid=2196 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.317963 kernel: audit: type=1400 audit(1696274134.251:681): avc: denied { perfmon } for pid=2196 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.251000 audit[2196]: AVC avc: denied { perfmon } for pid=2196 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.333192 kernel: audit: type=1400 audit(1696274134.251:681): avc: denied { perfmon } for pid=2196 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.333314 kernel: audit: 
type=1400 audit(1696274134.251:681): avc: denied { perfmon } for pid=2196 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.251000 audit[2196]: AVC avc: denied { perfmon } for pid=2196 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.251000 audit[2196]: AVC avc: denied { perfmon } for pid=2196 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.251000 audit[2196]: AVC avc: denied { perfmon } for pid=2196 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.349449 kernel: audit: type=1400 audit(1696274134.251:681): avc: denied { perfmon } for pid=2196 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.251000 audit[2196]: AVC avc: denied { perfmon } for pid=2196 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.251000 audit[2196]: AVC avc: denied { bpf } for pid=2196 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.251000 audit[2196]: AVC avc: denied { bpf } for pid=2196 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.251000 audit: BPF prog-id=81 op=LOAD Oct 2 19:15:34.251000 audit[2196]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001bd8e0 a2=78 a3=0 items=0 ppid=2119 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463393937353638303635633563363163653132323839346432623838 Oct 2 19:15:34.251000 audit[2196]: AVC avc: denied { bpf } for pid=2196 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.251000 audit[2196]: AVC avc: denied { bpf } for pid=2196 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.251000 audit[2196]: AVC avc: denied { perfmon } for pid=2196 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.251000 audit[2196]: AVC avc: denied { perfmon } for pid=2196 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.251000 audit[2196]: AVC avc: denied { perfmon } for pid=2196 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.251000 audit[2196]: AVC avc: denied { perfmon } for pid=2196 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.251000 audit[2196]: AVC avc: denied { perfmon } for pid=2196 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.251000 audit[2196]: AVC avc: denied { bpf } for pid=2196 comm="runc" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.251000 audit[2196]: AVC avc: denied { bpf } for pid=2196 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.251000 audit: BPF prog-id=82 op=LOAD Oct 2 19:15:34.251000 audit[2196]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=40001bd670 a2=78 a3=0 items=0 ppid=2119 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463393937353638303635633563363163653132323839346432623838 Oct 2 19:15:34.262000 audit: BPF prog-id=82 op=UNLOAD Oct 2 19:15:34.262000 audit: BPF prog-id=81 op=UNLOAD Oct 2 19:15:34.262000 audit[2196]: AVC avc: denied { bpf } for pid=2196 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.262000 audit[2196]: AVC avc: denied { bpf } for pid=2196 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.262000 audit[2196]: AVC avc: denied { bpf } for pid=2196 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.262000 audit[2196]: AVC avc: denied { perfmon } for pid=2196 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.262000 
audit[2196]: AVC avc: denied { perfmon } for pid=2196 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.262000 audit[2196]: AVC avc: denied { perfmon } for pid=2196 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.262000 audit[2196]: AVC avc: denied { perfmon } for pid=2196 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.262000 audit[2196]: AVC avc: denied { perfmon } for pid=2196 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.262000 audit[2196]: AVC avc: denied { bpf } for pid=2196 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.262000 audit[2196]: AVC avc: denied { bpf } for pid=2196 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:15:34.262000 audit: BPF prog-id=83 op=LOAD Oct 2 19:15:34.262000 audit[2196]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001bdb40 a2=78 a3=0 items=0 ppid=2119 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463393937353638303635633563363163653132323839346432623838 Oct 2 19:15:34.353551 env[1566]: 
time="2023-10-02T19:15:34.353477189Z" level=info msg="StartContainer for \"dc997568065c5c61ce122894d2b885b2fcc562b109d3654685767f5e8f00c322\" returns successfully" Oct 2 19:15:34.432573 kernel: IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) Oct 2 19:15:34.432696 kernel: IPVS: Connection hash table configured (size=4096, memory=32Kbytes) Oct 2 19:15:34.432738 kernel: IPVS: ipvs loaded. Oct 2 19:15:34.449054 kernel: IPVS: [rr] scheduler registered. Oct 2 19:15:34.461026 kernel: IPVS: [wrr] scheduler registered. Oct 2 19:15:34.474959 kernel: IPVS: [sh] scheduler registered. Oct 2 19:15:34.578000 audit[2255]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=2255 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.578000 audit[2255]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe1d75560 a2=0 a3=ffff978606c0 items=0 ppid=2207 pid=2255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.578000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:15:34.582000 audit[2256]: NETFILTER_CFG table=mangle:36 family=2 entries=1 op=nft_register_chain pid=2256 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.582000 audit[2256]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcb0c6340 a2=0 a3=ffffa17666c0 items=0 ppid=2207 pid=2256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.582000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:15:34.584000 audit[2257]: NETFILTER_CFG 
table=nat:37 family=10 entries=1 op=nft_register_chain pid=2257 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.584000 audit[2257]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffeb8db80 a2=0 a3=ffffb195a6c0 items=0 ppid=2207 pid=2257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.584000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:15:34.588000 audit[2258]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_chain pid=2258 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.588000 audit[2258]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe64b2530 a2=0 a3=ffff81b5a6c0 items=0 ppid=2207 pid=2258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.588000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:15:34.592000 audit[2260]: NETFILTER_CFG table=filter:39 family=10 entries=1 op=nft_register_chain pid=2260 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.592000 audit[2260]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe694de10 a2=0 a3=ffffb0a406c0 items=0 ppid=2207 pid=2260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.592000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:15:34.595000 
audit[2261]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain pid=2261 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.595000 audit[2261]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe4adc0e0 a2=0 a3=ffffba9df6c0 items=0 ppid=2207 pid=2261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.595000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:15:34.685428 kubelet[2020]: E1002 19:15:34.685313 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:34.690000 audit[2262]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2262 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.690000 audit[2262]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffcd846720 a2=0 a3=ffffa49116c0 items=0 ppid=2207 pid=2262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.690000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:15:34.699000 audit[2264]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=2264 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.699000 audit[2264]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe22a9d40 a2=0 a3=ffffb769d6c0 items=0 ppid=2207 pid=2264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.699000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:15:34.712000 audit[2267]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=2267 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.712000 audit[2267]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc1346300 a2=0 a3=ffffb4b776c0 items=0 ppid=2207 pid=2267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.712000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:15:34.717000 audit[2268]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2268 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.717000 audit[2268]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffa6def50 a2=0 a3=ffffab7836c0 items=0 ppid=2207 pid=2268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.717000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:15:34.725000 audit[2270]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2270 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Oct 2 19:15:34.725000 audit[2270]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd344a570 a2=0 a3=ffff97dd26c0 items=0 ppid=2207 pid=2270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.725000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:15:34.729000 audit[2271]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=2271 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.729000 audit[2271]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd53896f0 a2=0 a3=ffff864566c0 items=0 ppid=2207 pid=2271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.729000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:15:34.739000 audit[2273]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=2273 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.739000 audit[2273]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffcab839d0 a2=0 a3=ffffb7d896c0 items=0 ppid=2207 pid=2273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.739000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:15:34.751000 audit[2276]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2276 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.751000 audit[2276]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff2556c70 a2=0 a3=ffff9b5b66c0 items=0 ppid=2207 pid=2276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.751000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:15:34.754000 audit[2277]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2277 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.754000 audit[2277]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd825c080 a2=0 a3=ffff866606c0 items=0 ppid=2207 pid=2277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.754000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:15:34.763000 audit[2279]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2279 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.763000 audit[2279]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=528 a0=3 a1=ffffd97acca0 a2=0 a3=ffffa48716c0 items=0 ppid=2207 pid=2279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.763000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:15:34.766000 audit[2280]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2280 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.766000 audit[2280]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd9b32ab0 a2=0 a3=ffff9d9dd6c0 items=0 ppid=2207 pid=2280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.766000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:15:34.775000 audit[2282]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=2282 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.775000 audit[2282]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc6374480 a2=0 a3=ffffa9c8a6c0 items=0 ppid=2207 pid=2282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.775000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:15:34.787000 audit[2285]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2285 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.787000 audit[2285]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc4121520 a2=0 a3=ffff901406c0 items=0 ppid=2207 pid=2285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.787000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:15:34.799000 audit[2288]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=2288 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.799000 audit[2288]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffbd86ce0 a2=0 a3=ffff8ddbe6c0 items=0 ppid=2207 pid=2288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.799000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:15:34.804000 audit[2289]: NETFILTER_CFG table=nat:55 family=2 entries=1 
op=nft_register_chain pid=2289 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.804000 audit[2289]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc0dcc080 a2=0 a3=ffffa2ae86c0 items=0 ppid=2207 pid=2289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.804000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:15:34.813000 audit[2291]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=2291 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.813000 audit[2291]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=fffff927a380 a2=0 a3=ffff971af6c0 items=0 ppid=2207 pid=2291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.813000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:15:34.826000 audit[2294]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=2294 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:15:34.826000 audit[2294]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffc76ccc80 a2=0 a3=ffffa88106c0 items=0 ppid=2207 pid=2294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.826000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:15:34.854000 audit[2298]: NETFILTER_CFG table=filter:58 family=2 entries=6 op=nft_register_rule pid=2298 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:15:34.854000 audit[2298]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffd3f1d880 a2=0 a3=ffff8f5906c0 items=0 ppid=2207 pid=2298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.854000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:15:34.871000 audit[2298]: NETFILTER_CFG table=nat:59 family=2 entries=17 op=nft_register_chain pid=2298 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:15:34.871000 audit[2298]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffd3f1d880 a2=0 a3=ffff8f5906c0 items=0 ppid=2207 pid=2298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.871000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:15:34.881000 audit[2302]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=2302 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.881000 audit[2302]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd1691b60 a2=0 a3=ffff910b66c0 items=0 ppid=2207 pid=2302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.881000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:15:34.891000 audit[2304]: NETFILTER_CFG table=filter:61 family=10 entries=2 op=nft_register_chain pid=2304 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.891000 audit[2304]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd9d5f240 a2=0 a3=ffffa8b446c0 items=0 ppid=2207 pid=2304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.891000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:15:34.903000 audit[2307]: NETFILTER_CFG table=filter:62 family=10 entries=2 op=nft_register_chain pid=2307 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.903000 audit[2307]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffde2ae410 a2=0 a3=ffff9f4f16c0 items=0 ppid=2207 pid=2307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.903000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 19:15:34.907000 audit[2308]: 
NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=2308 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.907000 audit[2308]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffe166c20 a2=0 a3=ffff8e4796c0 items=0 ppid=2207 pid=2308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.907000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:15:34.915000 audit[2310]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_rule pid=2310 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.915000 audit[2310]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc16934e0 a2=0 a3=ffffbd0906c0 items=0 ppid=2207 pid=2310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.915000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:15:34.919000 audit[2311]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2311 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.919000 audit[2311]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffebf44c40 a2=0 a3=ffffbd6b46c0 items=0 ppid=2207 pid=2311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.919000 
audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:15:34.928000 audit[2313]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=2313 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.928000 audit[2313]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd6840a80 a2=0 a3=ffff824d06c0 items=0 ppid=2207 pid=2313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.928000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 19:15:34.940000 audit[2316]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2316 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.940000 audit[2316]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffc1845ba0 a2=0 a3=ffff848506c0 items=0 ppid=2207 pid=2316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.940000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:15:34.947000 audit[2317]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2317 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.947000 audit[2317]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff4104a10 a2=0 a3=ffff83f356c0 items=0 ppid=2207 pid=2317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.947000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:15:34.956000 audit[2319]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2319 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.956000 audit[2319]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe6475f90 a2=0 a3=ffff839396c0 items=0 ppid=2207 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.956000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:15:34.960000 audit[2320]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2320 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.960000 audit[2320]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe0072090 a2=0 a3=ffffb3ad06c0 items=0 ppid=2207 pid=2320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.960000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:15:34.971000 audit[2322]: NETFILTER_CFG 
table=filter:71 family=10 entries=1 op=nft_register_rule pid=2322 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.971000 audit[2322]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd5537bc0 a2=0 a3=ffff924586c0 items=0 ppid=2207 pid=2322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.971000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:15:34.983000 audit[2325]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=2325 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.983000 audit[2325]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc1c374c0 a2=0 a3=ffff9dce56c0 items=0 ppid=2207 pid=2325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.983000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:15:34.998000 audit[2328]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=2328 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:34.998000 audit[2328]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffe29d370 a2=0 a3=ffff8a6d46c0 items=0 ppid=2207 pid=2328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:34.998000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:15:35.002000 audit[2329]: NETFILTER_CFG table=nat:74 family=10 entries=1 op=nft_register_chain pid=2329 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:35.002000 audit[2329]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff6114470 a2=0 a3=ffffb2a876c0 items=0 ppid=2207 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:35.002000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:15:35.010000 audit[2331]: NETFILTER_CFG table=nat:75 family=10 entries=2 op=nft_register_chain pid=2331 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:35.010000 audit[2331]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffd6ed4e80 a2=0 a3=ffffa807c6c0 items=0 ppid=2207 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:35.010000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:15:35.022000 audit[2334]: NETFILTER_CFG table=nat:76 family=10 entries=2 op=nft_register_chain pid=2334 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:15:35.022000 audit[2334]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=fffffa0c5870 a2=0 a3=ffff891fc6c0 items=0 ppid=2207 pid=2334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:35.022000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:15:35.040000 audit[2338]: NETFILTER_CFG table=filter:77 family=10 entries=3 op=nft_register_rule pid=2338 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:15:35.040000 audit[2338]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=fffff7e746b0 a2=0 a3=ffffa112d6c0 items=0 ppid=2207 pid=2338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:35.040000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:15:35.041000 audit[2338]: NETFILTER_CFG table=nat:78 family=10 entries=10 op=nft_register_chain pid=2338 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:15:35.041000 audit[2338]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1860 a0=3 a1=fffff7e746b0 a2=0 a3=ffffa112d6c0 items=0 ppid=2207 pid=2338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:15:35.041000 audit: PROCTITLE 
proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:15:35.686338 kubelet[2020]: E1002 19:15:35.686244 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:36.686986 kubelet[2020]: E1002 19:15:36.686933 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:37.688073 kubelet[2020]: E1002 19:15:37.688008 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:38.688508 kubelet[2020]: E1002 19:15:38.688452 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:38.838006 kubelet[2020]: E1002 19:15:38.837959 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:15:39.688726 kubelet[2020]: E1002 19:15:39.688658 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:40.688860 kubelet[2020]: E1002 19:15:40.688770 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:41.320774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3994528855.mount: Deactivated successfully. Oct 2 19:15:41.688999 kubelet[2020]: E1002 19:15:41.688898 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:42.409045 update_engine[1547]: I1002 19:15:42.408972 1547 update_attempter.cc:505] Updating boot flags... 
Oct 2 19:15:42.689988 kubelet[2020]: E1002 19:15:42.689499 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:43.690734 kubelet[2020]: E1002 19:15:43.690693 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:43.839368 kubelet[2020]: E1002 19:15:43.839309 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:15:44.692997 kubelet[2020]: E1002 19:15:44.692934 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:45.558205 env[1566]: time="2023-10-02T19:15:45.558143653Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:45.560740 env[1566]: time="2023-10-02T19:15:45.560685634Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4204f456d3e4a8a7ac29109cf66dfd9b53e82d3f2e8574599e358096d890b8db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:45.563731 env[1566]: time="2023-10-02T19:15:45.563679622Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:15:45.564886 env[1566]: time="2023-10-02T19:15:45.564832336Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\" returns image reference \"sha256:4204f456d3e4a8a7ac29109cf66dfd9b53e82d3f2e8574599e358096d890b8db\"" Oct 2 19:15:45.569366 
env[1566]: time="2023-10-02T19:15:45.569300791Z" level=info msg="CreateContainer within sandbox \"e4ac73e9e26cee9ab50a439cc6e66fd2db28c3015bf930e4f29c46b66778110d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:15:45.586046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2817808581.mount: Deactivated successfully. Oct 2 19:15:45.595338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3451445677.mount: Deactivated successfully. Oct 2 19:15:45.606353 env[1566]: time="2023-10-02T19:15:45.606290065Z" level=info msg="CreateContainer within sandbox \"e4ac73e9e26cee9ab50a439cc6e66fd2db28c3015bf930e4f29c46b66778110d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bd9bd9e48a058168a5e70417f1a73b8f698b3ef07353d59402b68008147b5027\"" Oct 2 19:15:45.607475 env[1566]: time="2023-10-02T19:15:45.607427464Z" level=info msg="StartContainer for \"bd9bd9e48a058168a5e70417f1a73b8f698b3ef07353d59402b68008147b5027\"" Oct 2 19:15:45.659224 systemd[1]: Started cri-containerd-bd9bd9e48a058168a5e70417f1a73b8f698b3ef07353d59402b68008147b5027.scope. Oct 2 19:15:45.693324 kubelet[2020]: E1002 19:15:45.693244 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:45.695147 systemd[1]: cri-containerd-bd9bd9e48a058168a5e70417f1a73b8f698b3ef07353d59402b68008147b5027.scope: Deactivated successfully. Oct 2 19:15:46.580616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd9bd9e48a058168a5e70417f1a73b8f698b3ef07353d59402b68008147b5027-rootfs.mount: Deactivated successfully. 
Oct 2 19:15:46.694054 kubelet[2020]: E1002 19:15:46.693981 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:47.163781 env[1566]: time="2023-10-02T19:15:47.163684708Z" level=info msg="shim disconnected" id=bd9bd9e48a058168a5e70417f1a73b8f698b3ef07353d59402b68008147b5027 Oct 2 19:15:47.163781 env[1566]: time="2023-10-02T19:15:47.163774131Z" level=warning msg="cleaning up after shim disconnected" id=bd9bd9e48a058168a5e70417f1a73b8f698b3ef07353d59402b68008147b5027 namespace=k8s.io Oct 2 19:15:47.164474 env[1566]: time="2023-10-02T19:15:47.163797510Z" level=info msg="cleaning up dead shim" Oct 2 19:15:47.188887 env[1566]: time="2023-10-02T19:15:47.188799545Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:15:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2549 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:15:47Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/bd9bd9e48a058168a5e70417f1a73b8f698b3ef07353d59402b68008147b5027/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:15:47.189452 env[1566]: time="2023-10-02T19:15:47.189295086Z" level=error msg="copy shim log" error="read /proc/self/fd/52: file already closed" Oct 2 19:15:47.189979 env[1566]: time="2023-10-02T19:15:47.189881479Z" level=error msg="Failed to pipe stderr of container \"bd9bd9e48a058168a5e70417f1a73b8f698b3ef07353d59402b68008147b5027\"" error="reading from a closed fifo" Oct 2 19:15:47.194704 env[1566]: time="2023-10-02T19:15:47.194627896Z" level=error msg="Failed to pipe stdout of container \"bd9bd9e48a058168a5e70417f1a73b8f698b3ef07353d59402b68008147b5027\"" error="reading from a closed fifo" Oct 2 19:15:47.200118 env[1566]: time="2023-10-02T19:15:47.200002879Z" level=error msg="StartContainer for \"bd9bd9e48a058168a5e70417f1a73b8f698b3ef07353d59402b68008147b5027\" failed" 
error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:15:47.200978 kubelet[2020]: E1002 19:15:47.200627 2020 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="bd9bd9e48a058168a5e70417f1a73b8f698b3ef07353d59402b68008147b5027" Oct 2 19:15:47.200978 kubelet[2020]: E1002 19:15:47.200821 2020 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:15:47.200978 kubelet[2020]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:15:47.200978 kubelet[2020]: rm /hostbin/cilium-mount Oct 2 19:15:47.201329 kubelet[2020]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6wmlb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-swhzw_kube-system(1af4ab9f-e101-4cff-a4cc-854aa6d5192f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:15:47.201570 kubelet[2020]: E1002 19:15:47.200982 2020 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-swhzw" podUID=1af4ab9f-e101-4cff-a4cc-854aa6d5192f Oct 2 19:15:47.264725 env[1566]: time="2023-10-02T19:15:47.264661465Z" level=info msg="CreateContainer within sandbox \"e4ac73e9e26cee9ab50a439cc6e66fd2db28c3015bf930e4f29c46b66778110d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:15:47.285833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2142403369.mount: Deactivated successfully. Oct 2 19:15:47.302064 env[1566]: time="2023-10-02T19:15:47.301987520Z" level=info msg="CreateContainer within sandbox \"e4ac73e9e26cee9ab50a439cc6e66fd2db28c3015bf930e4f29c46b66778110d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"59eb11f6195fc4078cf9017d14790c3cce958b5583433e34c30b5a44cec69e38\"" Oct 2 19:15:47.303282 env[1566]: time="2023-10-02T19:15:47.303186972Z" level=info msg="StartContainer for \"59eb11f6195fc4078cf9017d14790c3cce958b5583433e34c30b5a44cec69e38\"" Oct 2 19:15:47.354352 systemd[1]: Started cri-containerd-59eb11f6195fc4078cf9017d14790c3cce958b5583433e34c30b5a44cec69e38.scope. Oct 2 19:15:47.389990 systemd[1]: cri-containerd-59eb11f6195fc4078cf9017d14790c3cce958b5583433e34c30b5a44cec69e38.scope: Deactivated successfully. 
Oct 2 19:15:47.411497 env[1566]: time="2023-10-02T19:15:47.411417388Z" level=info msg="shim disconnected" id=59eb11f6195fc4078cf9017d14790c3cce958b5583433e34c30b5a44cec69e38 Oct 2 19:15:47.411985 env[1566]: time="2023-10-02T19:15:47.411946581Z" level=warning msg="cleaning up after shim disconnected" id=59eb11f6195fc4078cf9017d14790c3cce958b5583433e34c30b5a44cec69e38 namespace=k8s.io Oct 2 19:15:47.412122 env[1566]: time="2023-10-02T19:15:47.412091115Z" level=info msg="cleaning up dead shim" Oct 2 19:15:47.438448 env[1566]: time="2023-10-02T19:15:47.438289554Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:15:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2586 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:15:47Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/59eb11f6195fc4078cf9017d14790c3cce958b5583433e34c30b5a44cec69e38/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:15:47.439895 env[1566]: time="2023-10-02T19:15:47.439785490Z" level=error msg="copy shim log" error="read /proc/self/fd/52: file already closed" Oct 2 19:15:47.443096 env[1566]: time="2023-10-02T19:15:47.443018065Z" level=error msg="Failed to pipe stdout of container \"59eb11f6195fc4078cf9017d14790c3cce958b5583433e34c30b5a44cec69e38\"" error="reading from a closed fifo" Oct 2 19:15:47.443242 env[1566]: time="2023-10-02T19:15:47.443144321Z" level=error msg="Failed to pipe stderr of container \"59eb11f6195fc4078cf9017d14790c3cce958b5583433e34c30b5a44cec69e38\"" error="reading from a closed fifo" Oct 2 19:15:47.445637 env[1566]: time="2023-10-02T19:15:47.445570912Z" level=error msg="StartContainer for \"59eb11f6195fc4078cf9017d14790c3cce958b5583433e34c30b5a44cec69e38\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:15:47.446756 kubelet[2020]: E1002 19:15:47.446097 2020 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="59eb11f6195fc4078cf9017d14790c3cce958b5583433e34c30b5a44cec69e38" Oct 2 19:15:47.446756 kubelet[2020]: E1002 19:15:47.446229 2020 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:15:47.446756 kubelet[2020]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:15:47.446756 kubelet[2020]: rm /hostbin/cilium-mount Oct 2 19:15:47.447255 kubelet[2020]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6wmlb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-swhzw_kube-system(1af4ab9f-e101-4cff-a4cc-854aa6d5192f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:15:47.447536 kubelet[2020]: E1002 19:15:47.446285 2020 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-swhzw" podUID=1af4ab9f-e101-4cff-a4cc-854aa6d5192f Oct 2 19:15:47.581031 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59eb11f6195fc4078cf9017d14790c3cce958b5583433e34c30b5a44cec69e38-rootfs.mount: Deactivated successfully. 
Oct 2 19:15:47.694451 kubelet[2020]: E1002 19:15:47.694264 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:48.264897 kubelet[2020]: I1002 19:15:48.264861 2020 scope.go:115] "RemoveContainer" containerID="bd9bd9e48a058168a5e70417f1a73b8f698b3ef07353d59402b68008147b5027" Oct 2 19:15:48.265529 kubelet[2020]: I1002 19:15:48.265458 2020 scope.go:115] "RemoveContainer" containerID="bd9bd9e48a058168a5e70417f1a73b8f698b3ef07353d59402b68008147b5027" Oct 2 19:15:48.268209 env[1566]: time="2023-10-02T19:15:48.268151018Z" level=info msg="RemoveContainer for \"bd9bd9e48a058168a5e70417f1a73b8f698b3ef07353d59402b68008147b5027\"" Oct 2 19:15:48.271447 env[1566]: time="2023-10-02T19:15:48.269664321Z" level=info msg="RemoveContainer for \"bd9bd9e48a058168a5e70417f1a73b8f698b3ef07353d59402b68008147b5027\"" Oct 2 19:15:48.271791 env[1566]: time="2023-10-02T19:15:48.271730732Z" level=error msg="RemoveContainer for \"bd9bd9e48a058168a5e70417f1a73b8f698b3ef07353d59402b68008147b5027\" failed" error="failed to set removing state for container \"bd9bd9e48a058168a5e70417f1a73b8f698b3ef07353d59402b68008147b5027\": container is already in removing state" Oct 2 19:15:48.272377 kubelet[2020]: E1002 19:15:48.272340 2020 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"bd9bd9e48a058168a5e70417f1a73b8f698b3ef07353d59402b68008147b5027\": container is already in removing state" containerID="bd9bd9e48a058168a5e70417f1a73b8f698b3ef07353d59402b68008147b5027" Oct 2 19:15:48.272655 kubelet[2020]: E1002 19:15:48.272625 2020 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "bd9bd9e48a058168a5e70417f1a73b8f698b3ef07353d59402b68008147b5027": container is already in removing state; Skipping pod 
"cilium-swhzw_kube-system(1af4ab9f-e101-4cff-a4cc-854aa6d5192f)" Oct 2 19:15:48.273702 kubelet[2020]: E1002 19:15:48.273665 2020 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-swhzw_kube-system(1af4ab9f-e101-4cff-a4cc-854aa6d5192f)\"" pod="kube-system/cilium-swhzw" podUID=1af4ab9f-e101-4cff-a4cc-854aa6d5192f Oct 2 19:15:48.275955 env[1566]: time="2023-10-02T19:15:48.275863518Z" level=info msg="RemoveContainer for \"bd9bd9e48a058168a5e70417f1a73b8f698b3ef07353d59402b68008147b5027\" returns successfully" Oct 2 19:15:48.695113 kubelet[2020]: E1002 19:15:48.695044 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:48.840607 kubelet[2020]: E1002 19:15:48.840574 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:15:49.269696 kubelet[2020]: E1002 19:15:49.269653 2020 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-swhzw_kube-system(1af4ab9f-e101-4cff-a4cc-854aa6d5192f)\"" pod="kube-system/cilium-swhzw" podUID=1af4ab9f-e101-4cff-a4cc-854aa6d5192f Oct 2 19:15:49.695634 kubelet[2020]: E1002 19:15:49.695591 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:50.270513 kubelet[2020]: W1002 19:15:50.270450 2020 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1af4ab9f_e101_4cff_a4cc_854aa6d5192f.slice/cri-containerd-bd9bd9e48a058168a5e70417f1a73b8f698b3ef07353d59402b68008147b5027.scope WatchSource:0}: 
container "bd9bd9e48a058168a5e70417f1a73b8f698b3ef07353d59402b68008147b5027" in namespace "k8s.io": not found Oct 2 19:15:50.697201 kubelet[2020]: E1002 19:15:50.697156 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:51.698854 kubelet[2020]: E1002 19:15:51.698790 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:52.699824 kubelet[2020]: E1002 19:15:52.699777 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:53.380129 kubelet[2020]: W1002 19:15:53.380060 2020 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1af4ab9f_e101_4cff_a4cc_854aa6d5192f.slice/cri-containerd-59eb11f6195fc4078cf9017d14790c3cce958b5583433e34c30b5a44cec69e38.scope WatchSource:0}: task 59eb11f6195fc4078cf9017d14790c3cce958b5583433e34c30b5a44cec69e38 not found: not found Oct 2 19:15:53.670199 kubelet[2020]: E1002 19:15:53.670050 2020 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:53.701687 kubelet[2020]: E1002 19:15:53.701615 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:53.841529 kubelet[2020]: E1002 19:15:53.841496 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:15:54.702235 kubelet[2020]: E1002 19:15:54.702189 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:55.703571 kubelet[2020]: E1002 19:15:55.703523 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:56.705397 kubelet[2020]: E1002 19:15:56.705326 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:57.705931 kubelet[2020]: E1002 19:15:57.705855 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:58.707639 kubelet[2020]: E1002 19:15:58.707568 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:58.843647 kubelet[2020]: E1002 19:15:58.843593 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:15:59.708429 kubelet[2020]: E1002 19:15:59.708355 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:00.709445 kubelet[2020]: E1002 19:16:00.709374 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:01.710342 kubelet[2020]: E1002 19:16:01.710293 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:02.711593 kubelet[2020]: E1002 19:16:02.711530 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:03.712441 kubelet[2020]: E1002 19:16:03.712372 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:03.844457 kubelet[2020]: E1002 19:16:03.844395 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:16:04.713295 
kubelet[2020]: E1002 19:16:04.713240 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:05.153382 env[1566]: time="2023-10-02T19:16:05.153078608Z" level=info msg="CreateContainer within sandbox \"e4ac73e9e26cee9ab50a439cc6e66fd2db28c3015bf930e4f29c46b66778110d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:16:05.171419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3965407307.mount: Deactivated successfully. Oct 2 19:16:05.181486 env[1566]: time="2023-10-02T19:16:05.181420920Z" level=info msg="CreateContainer within sandbox \"e4ac73e9e26cee9ab50a439cc6e66fd2db28c3015bf930e4f29c46b66778110d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"891785b7dba86c33a6d7e1cd9e08c5751a51eb6c25279892baddea4903962eb7\"" Oct 2 19:16:05.183165 env[1566]: time="2023-10-02T19:16:05.183112339Z" level=info msg="StartContainer for \"891785b7dba86c33a6d7e1cd9e08c5751a51eb6c25279892baddea4903962eb7\"" Oct 2 19:16:05.244210 systemd[1]: Started cri-containerd-891785b7dba86c33a6d7e1cd9e08c5751a51eb6c25279892baddea4903962eb7.scope. Oct 2 19:16:05.280460 systemd[1]: cri-containerd-891785b7dba86c33a6d7e1cd9e08c5751a51eb6c25279892baddea4903962eb7.scope: Deactivated successfully. 
Oct 2 19:16:05.303084 env[1566]: time="2023-10-02T19:16:05.302972312Z" level=info msg="shim disconnected" id=891785b7dba86c33a6d7e1cd9e08c5751a51eb6c25279892baddea4903962eb7 Oct 2 19:16:05.303353 env[1566]: time="2023-10-02T19:16:05.303106925Z" level=warning msg="cleaning up after shim disconnected" id=891785b7dba86c33a6d7e1cd9e08c5751a51eb6c25279892baddea4903962eb7 namespace=k8s.io Oct 2 19:16:05.303353 env[1566]: time="2023-10-02T19:16:05.303130279Z" level=info msg="cleaning up dead shim" Oct 2 19:16:05.329887 env[1566]: time="2023-10-02T19:16:05.329798152Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:16:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2627 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:16:05Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/891785b7dba86c33a6d7e1cd9e08c5751a51eb6c25279892baddea4903962eb7/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:16:05.330361 env[1566]: time="2023-10-02T19:16:05.330263568Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:16:05.331131 env[1566]: time="2023-10-02T19:16:05.331073623Z" level=error msg="Failed to pipe stdout of container \"891785b7dba86c33a6d7e1cd9e08c5751a51eb6c25279892baddea4903962eb7\"" error="reading from a closed fifo" Oct 2 19:16:05.334147 env[1566]: time="2023-10-02T19:16:05.334065932Z" level=error msg="Failed to pipe stderr of container \"891785b7dba86c33a6d7e1cd9e08c5751a51eb6c25279892baddea4903962eb7\"" error="reading from a closed fifo" Oct 2 19:16:05.336465 env[1566]: time="2023-10-02T19:16:05.336386854Z" level=error msg="StartContainer for \"891785b7dba86c33a6d7e1cd9e08c5751a51eb6c25279892baddea4903962eb7\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:16:05.336718 kubelet[2020]: E1002 19:16:05.336682 2020 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="891785b7dba86c33a6d7e1cd9e08c5751a51eb6c25279892baddea4903962eb7" Oct 2 19:16:05.336881 kubelet[2020]: E1002 19:16:05.336826 2020 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:16:05.336881 kubelet[2020]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:16:05.336881 kubelet[2020]: rm /hostbin/cilium-mount Oct 2 19:16:05.336881 kubelet[2020]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6wmlb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-swhzw_kube-system(1af4ab9f-e101-4cff-a4cc-854aa6d5192f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:16:05.337218 kubelet[2020]: E1002 19:16:05.336885 2020 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-swhzw" podUID=1af4ab9f-e101-4cff-a4cc-854aa6d5192f Oct 2 19:16:05.713671 kubelet[2020]: E1002 19:16:05.713608 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:06.165432 systemd[1]: run-containerd-runc-k8s.io-891785b7dba86c33a6d7e1cd9e08c5751a51eb6c25279892baddea4903962eb7-runc.n7qODB.mount: Deactivated successfully. Oct 2 19:16:06.165608 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-891785b7dba86c33a6d7e1cd9e08c5751a51eb6c25279892baddea4903962eb7-rootfs.mount: Deactivated successfully. 
Oct 2 19:16:06.309644 kubelet[2020]: I1002 19:16:06.309611 2020 scope.go:115] "RemoveContainer" containerID="59eb11f6195fc4078cf9017d14790c3cce958b5583433e34c30b5a44cec69e38" Oct 2 19:16:06.310379 kubelet[2020]: I1002 19:16:06.310353 2020 scope.go:115] "RemoveContainer" containerID="59eb11f6195fc4078cf9017d14790c3cce958b5583433e34c30b5a44cec69e38" Oct 2 19:16:06.314466 env[1566]: time="2023-10-02T19:16:06.313992053Z" level=info msg="RemoveContainer for \"59eb11f6195fc4078cf9017d14790c3cce958b5583433e34c30b5a44cec69e38\"" Oct 2 19:16:06.317401 env[1566]: time="2023-10-02T19:16:06.316858803Z" level=info msg="RemoveContainer for \"59eb11f6195fc4078cf9017d14790c3cce958b5583433e34c30b5a44cec69e38\"" Oct 2 19:16:06.317401 env[1566]: time="2023-10-02T19:16:06.317125293Z" level=error msg="RemoveContainer for \"59eb11f6195fc4078cf9017d14790c3cce958b5583433e34c30b5a44cec69e38\" failed" error="failed to set removing state for container \"59eb11f6195fc4078cf9017d14790c3cce958b5583433e34c30b5a44cec69e38\": container is already in removing state" Oct 2 19:16:06.317668 kubelet[2020]: E1002 19:16:06.317441 2020 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"59eb11f6195fc4078cf9017d14790c3cce958b5583433e34c30b5a44cec69e38\": container is already in removing state" containerID="59eb11f6195fc4078cf9017d14790c3cce958b5583433e34c30b5a44cec69e38" Oct 2 19:16:06.317668 kubelet[2020]: E1002 19:16:06.317501 2020 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "59eb11f6195fc4078cf9017d14790c3cce958b5583433e34c30b5a44cec69e38": container is already in removing state; Skipping pod "cilium-swhzw_kube-system(1af4ab9f-e101-4cff-a4cc-854aa6d5192f)" Oct 2 19:16:06.318075 kubelet[2020]: E1002 19:16:06.318017 2020 pod_workers.go:965] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-swhzw_kube-system(1af4ab9f-e101-4cff-a4cc-854aa6d5192f)\"" pod="kube-system/cilium-swhzw" podUID=1af4ab9f-e101-4cff-a4cc-854aa6d5192f Oct 2 19:16:06.321227 env[1566]: time="2023-10-02T19:16:06.321158401Z" level=info msg="RemoveContainer for \"59eb11f6195fc4078cf9017d14790c3cce958b5583433e34c30b5a44cec69e38\" returns successfully" Oct 2 19:16:06.714538 kubelet[2020]: E1002 19:16:06.714470 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:07.715111 kubelet[2020]: E1002 19:16:07.715033 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:08.408807 kubelet[2020]: W1002 19:16:08.408757 2020 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1af4ab9f_e101_4cff_a4cc_854aa6d5192f.slice/cri-containerd-891785b7dba86c33a6d7e1cd9e08c5751a51eb6c25279892baddea4903962eb7.scope WatchSource:0}: task 891785b7dba86c33a6d7e1cd9e08c5751a51eb6c25279892baddea4903962eb7 not found: not found Oct 2 19:16:08.716424 kubelet[2020]: E1002 19:16:08.716285 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:08.846040 kubelet[2020]: E1002 19:16:08.846007 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:16:09.718069 kubelet[2020]: E1002 19:16:09.717990 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:10.718414 kubelet[2020]: E1002 19:16:10.718342 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:11.718607 kubelet[2020]: E1002 19:16:11.718543 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:12.719674 kubelet[2020]: E1002 19:16:12.719609 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:13.670011 kubelet[2020]: E1002 19:16:13.669965 2020 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:13.720092 kubelet[2020]: E1002 19:16:13.720030 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:13.847485 kubelet[2020]: E1002 19:16:13.847442 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:16:14.721640 kubelet[2020]: E1002 19:16:14.721595 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:15.722974 kubelet[2020]: E1002 19:16:15.722899 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:16.723623 kubelet[2020]: E1002 19:16:16.723556 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:17.723939 kubelet[2020]: E1002 19:16:17.723877 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:18.724347 kubelet[2020]: E1002 19:16:18.724278 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:18.848471 kubelet[2020]: E1002 19:16:18.848403 2020 kubelet.go:2373] "Container 
runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:16:19.724786 kubelet[2020]: E1002 19:16:19.724718 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:20.724942 kubelet[2020]: E1002 19:16:20.724868 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:21.148796 kubelet[2020]: E1002 19:16:21.148741 2020 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-swhzw_kube-system(1af4ab9f-e101-4cff-a4cc-854aa6d5192f)\"" pod="kube-system/cilium-swhzw" podUID=1af4ab9f-e101-4cff-a4cc-854aa6d5192f Oct 2 19:16:21.726730 kubelet[2020]: E1002 19:16:21.726663 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:22.727727 kubelet[2020]: E1002 19:16:22.727663 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:23.728041 kubelet[2020]: E1002 19:16:23.727995 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:23.850207 kubelet[2020]: E1002 19:16:23.850170 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:16:24.728867 kubelet[2020]: E1002 19:16:24.728792 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:25.729627 kubelet[2020]: E1002 19:16:25.729580 2020 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:26.731322 kubelet[2020]: E1002 19:16:26.731274 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:27.732657 kubelet[2020]: E1002 19:16:27.732612 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:28.734397 kubelet[2020]: E1002 19:16:28.734330 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:28.851928 kubelet[2020]: E1002 19:16:28.851859 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:16:29.734541 kubelet[2020]: E1002 19:16:29.734466 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:30.734704 kubelet[2020]: E1002 19:16:30.734630 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:31.735472 kubelet[2020]: E1002 19:16:31.735404 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:32.735672 kubelet[2020]: E1002 19:16:32.735630 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:33.669624 kubelet[2020]: E1002 19:16:33.669554 2020 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:33.737321 kubelet[2020]: E1002 19:16:33.737254 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:33.852794 kubelet[2020]: E1002 19:16:33.852761 2020 
kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:16:34.738004 kubelet[2020]: E1002 19:16:34.737942 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:35.152642 env[1566]: time="2023-10-02T19:16:35.152578979Z" level=info msg="CreateContainer within sandbox \"e4ac73e9e26cee9ab50a439cc6e66fd2db28c3015bf930e4f29c46b66778110d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:16:35.169732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1136877861.mount: Deactivated successfully. Oct 2 19:16:35.183156 env[1566]: time="2023-10-02T19:16:35.183074462Z" level=info msg="CreateContainer within sandbox \"e4ac73e9e26cee9ab50a439cc6e66fd2db28c3015bf930e4f29c46b66778110d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"2d237e6da408d8899cadabe6fe1b0b1f8dc8575196f1a7f3553f40f5eed9b9b7\"" Oct 2 19:16:35.184420 env[1566]: time="2023-10-02T19:16:35.184360182Z" level=info msg="StartContainer for \"2d237e6da408d8899cadabe6fe1b0b1f8dc8575196f1a7f3553f40f5eed9b9b7\"" Oct 2 19:16:35.236874 systemd[1]: Started cri-containerd-2d237e6da408d8899cadabe6fe1b0b1f8dc8575196f1a7f3553f40f5eed9b9b7.scope. Oct 2 19:16:35.274817 systemd[1]: cri-containerd-2d237e6da408d8899cadabe6fe1b0b1f8dc8575196f1a7f3553f40f5eed9b9b7.scope: Deactivated successfully. 
Oct 2 19:16:35.294816 env[1566]: time="2023-10-02T19:16:35.294725434Z" level=info msg="shim disconnected" id=2d237e6da408d8899cadabe6fe1b0b1f8dc8575196f1a7f3553f40f5eed9b9b7 Oct 2 19:16:35.294816 env[1566]: time="2023-10-02T19:16:35.294802392Z" level=warning msg="cleaning up after shim disconnected" id=2d237e6da408d8899cadabe6fe1b0b1f8dc8575196f1a7f3553f40f5eed9b9b7 namespace=k8s.io Oct 2 19:16:35.294816 env[1566]: time="2023-10-02T19:16:35.294826021Z" level=info msg="cleaning up dead shim" Oct 2 19:16:35.320562 env[1566]: time="2023-10-02T19:16:35.320489119Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:16:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2669 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:16:35Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/2d237e6da408d8899cadabe6fe1b0b1f8dc8575196f1a7f3553f40f5eed9b9b7/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:16:35.321099 env[1566]: time="2023-10-02T19:16:35.321005399Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:16:35.321453 env[1566]: time="2023-10-02T19:16:35.321393911Z" level=error msg="Failed to pipe stdout of container \"2d237e6da408d8899cadabe6fe1b0b1f8dc8575196f1a7f3553f40f5eed9b9b7\"" error="reading from a closed fifo" Oct 2 19:16:35.325141 env[1566]: time="2023-10-02T19:16:35.325070623Z" level=error msg="Failed to pipe stderr of container \"2d237e6da408d8899cadabe6fe1b0b1f8dc8575196f1a7f3553f40f5eed9b9b7\"" error="reading from a closed fifo" Oct 2 19:16:35.327613 env[1566]: time="2023-10-02T19:16:35.327529861Z" level=error msg="StartContainer for \"2d237e6da408d8899cadabe6fe1b0b1f8dc8575196f1a7f3553f40f5eed9b9b7\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:16:35.328343 kubelet[2020]: E1002 19:16:35.328071 2020 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="2d237e6da408d8899cadabe6fe1b0b1f8dc8575196f1a7f3553f40f5eed9b9b7" Oct 2 19:16:35.328343 kubelet[2020]: E1002 19:16:35.328218 2020 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:16:35.328343 kubelet[2020]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:16:35.328343 kubelet[2020]: rm /hostbin/cilium-mount Oct 2 19:16:35.328743 kubelet[2020]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6wmlb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-swhzw_kube-system(1af4ab9f-e101-4cff-a4cc-854aa6d5192f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:16:35.328889 kubelet[2020]: E1002 19:16:35.328302 2020 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-swhzw" podUID=1af4ab9f-e101-4cff-a4cc-854aa6d5192f Oct 2 19:16:35.372694 kubelet[2020]: I1002 19:16:35.372642 2020 scope.go:115] "RemoveContainer" containerID="891785b7dba86c33a6d7e1cd9e08c5751a51eb6c25279892baddea4903962eb7" Oct 2 19:16:35.373427 kubelet[2020]: I1002 19:16:35.373379 2020 scope.go:115] "RemoveContainer" containerID="891785b7dba86c33a6d7e1cd9e08c5751a51eb6c25279892baddea4903962eb7" Oct 2 19:16:35.376713 env[1566]: time="2023-10-02T19:16:35.376651659Z" level=info msg="RemoveContainer for \"891785b7dba86c33a6d7e1cd9e08c5751a51eb6c25279892baddea4903962eb7\"" Oct 2 19:16:35.378476 env[1566]: time="2023-10-02T19:16:35.378398434Z" level=info msg="RemoveContainer for \"891785b7dba86c33a6d7e1cd9e08c5751a51eb6c25279892baddea4903962eb7\"" Oct 2 19:16:35.378924 env[1566]: 
time="2023-10-02T19:16:35.378825540Z" level=error msg="RemoveContainer for \"891785b7dba86c33a6d7e1cd9e08c5751a51eb6c25279892baddea4903962eb7\" failed" error="failed to set removing state for container \"891785b7dba86c33a6d7e1cd9e08c5751a51eb6c25279892baddea4903962eb7\": container is already in removing state" Oct 2 19:16:35.379968 kubelet[2020]: E1002 19:16:35.379390 2020 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"891785b7dba86c33a6d7e1cd9e08c5751a51eb6c25279892baddea4903962eb7\": container is already in removing state" containerID="891785b7dba86c33a6d7e1cd9e08c5751a51eb6c25279892baddea4903962eb7" Oct 2 19:16:35.379968 kubelet[2020]: E1002 19:16:35.379449 2020 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "891785b7dba86c33a6d7e1cd9e08c5751a51eb6c25279892baddea4903962eb7": container is already in removing state; Skipping pod "cilium-swhzw_kube-system(1af4ab9f-e101-4cff-a4cc-854aa6d5192f)" Oct 2 19:16:35.379968 kubelet[2020]: E1002 19:16:35.379889 2020 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-swhzw_kube-system(1af4ab9f-e101-4cff-a4cc-854aa6d5192f)\"" pod="kube-system/cilium-swhzw" podUID=1af4ab9f-e101-4cff-a4cc-854aa6d5192f Oct 2 19:16:35.382658 env[1566]: time="2023-10-02T19:16:35.382585331Z" level=info msg="RemoveContainer for \"891785b7dba86c33a6d7e1cd9e08c5751a51eb6c25279892baddea4903962eb7\" returns successfully" Oct 2 19:16:35.739072 kubelet[2020]: E1002 19:16:35.738991 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:36.165650 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-2d237e6da408d8899cadabe6fe1b0b1f8dc8575196f1a7f3553f40f5eed9b9b7-rootfs.mount: Deactivated successfully. Oct 2 19:16:36.739552 kubelet[2020]: E1002 19:16:36.739491 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:37.740057 kubelet[2020]: E1002 19:16:37.739991 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:38.400944 kubelet[2020]: W1002 19:16:38.400866 2020 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1af4ab9f_e101_4cff_a4cc_854aa6d5192f.slice/cri-containerd-2d237e6da408d8899cadabe6fe1b0b1f8dc8575196f1a7f3553f40f5eed9b9b7.scope WatchSource:0}: task 2d237e6da408d8899cadabe6fe1b0b1f8dc8575196f1a7f3553f40f5eed9b9b7 not found: not found Oct 2 19:16:38.741409 kubelet[2020]: E1002 19:16:38.741001 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:38.854629 kubelet[2020]: E1002 19:16:38.854528 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:16:39.742384 kubelet[2020]: E1002 19:16:39.742318 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:40.743074 kubelet[2020]: E1002 19:16:40.743025 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:41.744586 kubelet[2020]: E1002 19:16:41.744537 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:42.746295 kubelet[2020]: E1002 19:16:42.746248 2020 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:43.747242 kubelet[2020]: E1002 19:16:43.747196 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:43.856435 kubelet[2020]: E1002 19:16:43.856402 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:16:44.748900 kubelet[2020]: E1002 19:16:44.748834 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:45.749242 kubelet[2020]: E1002 19:16:45.749194 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:46.149110 kubelet[2020]: E1002 19:16:46.149058 2020 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-swhzw_kube-system(1af4ab9f-e101-4cff-a4cc-854aa6d5192f)\"" pod="kube-system/cilium-swhzw" podUID=1af4ab9f-e101-4cff-a4cc-854aa6d5192f Oct 2 19:16:46.750810 kubelet[2020]: E1002 19:16:46.750762 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:47.752199 kubelet[2020]: E1002 19:16:47.752101 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:48.753075 kubelet[2020]: E1002 19:16:48.753026 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:48.858544 kubelet[2020]: E1002 19:16:48.858490 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" Oct 2 19:16:49.753930 kubelet[2020]: E1002 19:16:49.753833 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:50.754093 kubelet[2020]: E1002 19:16:50.754046 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:51.755713 kubelet[2020]: E1002 19:16:51.755644 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:52.756128 kubelet[2020]: E1002 19:16:52.756083 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:53.670180 kubelet[2020]: E1002 19:16:53.670133 2020 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:53.757844 kubelet[2020]: E1002 19:16:53.757769 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:53.859738 kubelet[2020]: E1002 19:16:53.859695 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:16:54.758408 kubelet[2020]: E1002 19:16:54.758361 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:55.759730 kubelet[2020]: E1002 19:16:55.759667 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:56.759900 kubelet[2020]: E1002 19:16:56.759839 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:57.760401 kubelet[2020]: E1002 19:16:57.760336 2020 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:58.760684 kubelet[2020]: E1002 19:16:58.760640 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:16:58.861384 kubelet[2020]: E1002 19:16:58.861343 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:16:59.762013 kubelet[2020]: E1002 19:16:59.761949 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:00.763031 kubelet[2020]: E1002 19:17:00.762987 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:01.149054 kubelet[2020]: E1002 19:17:01.149003 2020 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-swhzw_kube-system(1af4ab9f-e101-4cff-a4cc-854aa6d5192f)\"" pod="kube-system/cilium-swhzw" podUID=1af4ab9f-e101-4cff-a4cc-854aa6d5192f Oct 2 19:17:01.764439 kubelet[2020]: E1002 19:17:01.764372 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:02.765296 kubelet[2020]: E1002 19:17:02.765248 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:03.766930 kubelet[2020]: E1002 19:17:03.766854 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:03.862465 kubelet[2020]: E1002 19:17:03.862429 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:04.767416 kubelet[2020]: E1002 19:17:04.767343 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:05.767654 kubelet[2020]: E1002 19:17:05.767582 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:06.768064 kubelet[2020]: E1002 19:17:06.767997 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:07.768490 kubelet[2020]: E1002 19:17:07.768421 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:08.768937 kubelet[2020]: E1002 19:17:08.768863 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:08.864369 kubelet[2020]: E1002 19:17:08.864321 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:09.769992 kubelet[2020]: E1002 19:17:09.769947 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:10.771398 kubelet[2020]: E1002 19:17:10.771324 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:11.772558 kubelet[2020]: E1002 19:17:11.772511 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:12.773968 kubelet[2020]: E1002 19:17:12.773888 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:13.670384 kubelet[2020]: 
E1002 19:17:13.670318 2020 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:13.774115 kubelet[2020]: E1002 19:17:13.774038 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:13.865324 kubelet[2020]: E1002 19:17:13.865270 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:14.149363 kubelet[2020]: E1002 19:17:14.149309 2020 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-swhzw_kube-system(1af4ab9f-e101-4cff-a4cc-854aa6d5192f)\"" pod="kube-system/cilium-swhzw" podUID=1af4ab9f-e101-4cff-a4cc-854aa6d5192f Oct 2 19:17:14.775091 kubelet[2020]: E1002 19:17:14.775047 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:15.776233 kubelet[2020]: E1002 19:17:15.776163 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:16.776623 kubelet[2020]: E1002 19:17:16.776571 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:17.777521 kubelet[2020]: E1002 19:17:17.777456 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:18.778638 kubelet[2020]: E1002 19:17:18.778574 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:18.866644 kubelet[2020]: E1002 19:17:18.866590 2020 kubelet.go:2373] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:19.779312 kubelet[2020]: E1002 19:17:19.779243 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:20.780360 kubelet[2020]: E1002 19:17:20.780294 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:21.781005 kubelet[2020]: E1002 19:17:21.780862 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:22.782014 kubelet[2020]: E1002 19:17:22.781973 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:23.783436 kubelet[2020]: E1002 19:17:23.783371 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:23.867605 kubelet[2020]: E1002 19:17:23.867564 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:24.783812 kubelet[2020]: E1002 19:17:24.783745 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:25.784272 kubelet[2020]: E1002 19:17:25.784199 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:26.154046 env[1566]: time="2023-10-02T19:17:26.153970602Z" level=info msg="CreateContainer within sandbox \"e4ac73e9e26cee9ab50a439cc6e66fd2db28c3015bf930e4f29c46b66778110d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 19:17:26.172432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3280048722.mount: Deactivated 
successfully.
Oct 2 19:17:26.184701 env[1566]: time="2023-10-02T19:17:26.184594330Z" level=info msg="CreateContainer within sandbox \"e4ac73e9e26cee9ab50a439cc6e66fd2db28c3015bf930e4f29c46b66778110d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"e3db01dbc63fe6b1bc6a59716b0ebe72770f7908f00ab8d4a9fe1ba905daf1a5\""
Oct 2 19:17:26.186387 env[1566]: time="2023-10-02T19:17:26.186315992Z" level=info msg="StartContainer for \"e3db01dbc63fe6b1bc6a59716b0ebe72770f7908f00ab8d4a9fe1ba905daf1a5\""
Oct 2 19:17:26.234822 systemd[1]: Started cri-containerd-e3db01dbc63fe6b1bc6a59716b0ebe72770f7908f00ab8d4a9fe1ba905daf1a5.scope.
Oct 2 19:17:26.273611 systemd[1]: cri-containerd-e3db01dbc63fe6b1bc6a59716b0ebe72770f7908f00ab8d4a9fe1ba905daf1a5.scope: Deactivated successfully.
Oct 2 19:17:26.291872 env[1566]: time="2023-10-02T19:17:26.291804714Z" level=info msg="shim disconnected" id=e3db01dbc63fe6b1bc6a59716b0ebe72770f7908f00ab8d4a9fe1ba905daf1a5
Oct 2 19:17:26.292304 env[1566]: time="2023-10-02T19:17:26.292268544Z" level=warning msg="cleaning up after shim disconnected" id=e3db01dbc63fe6b1bc6a59716b0ebe72770f7908f00ab8d4a9fe1ba905daf1a5 namespace=k8s.io
Oct 2 19:17:26.292431 env[1566]: time="2023-10-02T19:17:26.292402886Z" level=info msg="cleaning up dead shim"
Oct 2 19:17:26.318896 env[1566]: time="2023-10-02T19:17:26.318833771Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:17:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2711 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:17:26Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e3db01dbc63fe6b1bc6a59716b0ebe72770f7908f00ab8d4a9fe1ba905daf1a5/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Oct 2 19:17:26.319602 env[1566]: time="2023-10-02T19:17:26.319523792Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed"
Oct 2 19:17:26.324055 env[1566]: time="2023-10-02T19:17:26.323983999Z" level=error msg="Failed to pipe stderr of container \"e3db01dbc63fe6b1bc6a59716b0ebe72770f7908f00ab8d4a9fe1ba905daf1a5\"" error="reading from a closed fifo"
Oct 2 19:17:26.324311 env[1566]: time="2023-10-02T19:17:26.324112028Z" level=error msg="Failed to pipe stdout of container \"e3db01dbc63fe6b1bc6a59716b0ebe72770f7908f00ab8d4a9fe1ba905daf1a5\"" error="reading from a closed fifo"
Oct 2 19:17:26.326502 env[1566]: time="2023-10-02T19:17:26.326390702Z" level=error msg="StartContainer for \"e3db01dbc63fe6b1bc6a59716b0ebe72770f7908f00ab8d4a9fe1ba905daf1a5\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Oct 2 19:17:26.327488 kubelet[2020]: E1002 19:17:26.326818 2020 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e3db01dbc63fe6b1bc6a59716b0ebe72770f7908f00ab8d4a9fe1ba905daf1a5"
Oct 2 19:17:26.327488 kubelet[2020]: E1002 19:17:26.326988 2020 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Oct 2 19:17:26.327488 kubelet[2020]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Oct 2 19:17:26.327488 kubelet[2020]: rm /hostbin/cilium-mount
Oct 2 19:17:26.328031 kubelet[2020]:
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6wmlb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-swhzw_kube-system(1af4ab9f-e101-4cff-a4cc-854aa6d5192f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:17:26.328184 kubelet[2020]: E1002 19:17:26.327048 2020 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-swhzw" podUID=1af4ab9f-e101-4cff-a4cc-854aa6d5192f
Oct 2 19:17:26.481237 kubelet[2020]: I1002 19:17:26.481119 2020 scope.go:115] "RemoveContainer" containerID="2d237e6da408d8899cadabe6fe1b0b1f8dc8575196f1a7f3553f40f5eed9b9b7"
Oct 2 19:17:26.482588 kubelet[2020]: I1002 19:17:26.482553 2020 scope.go:115] "RemoveContainer" containerID="2d237e6da408d8899cadabe6fe1b0b1f8dc8575196f1a7f3553f40f5eed9b9b7"
Oct 2 19:17:26.485143 env[1566]: time="2023-10-02T19:17:26.485056282Z" level=info msg="RemoveContainer for \"2d237e6da408d8899cadabe6fe1b0b1f8dc8575196f1a7f3553f40f5eed9b9b7\""
Oct 2 19:17:26.486182 env[1566]: time="2023-10-02T19:17:26.486129228Z" level=info msg="RemoveContainer for \"2d237e6da408d8899cadabe6fe1b0b1f8dc8575196f1a7f3553f40f5eed9b9b7\""
Oct 2 19:17:26.488373 env[1566]: time="2023-10-02T19:17:26.488197491Z" level=error msg="RemoveContainer for \"2d237e6da408d8899cadabe6fe1b0b1f8dc8575196f1a7f3553f40f5eed9b9b7\" failed" error="rpc error: code = NotFound desc = get container info: container \"2d237e6da408d8899cadabe6fe1b0b1f8dc8575196f1a7f3553f40f5eed9b9b7\" in namespace \"k8s.io\": not found"
Oct 2 19:17:26.489617 kubelet[2020]: E1002 19:17:26.489565 2020 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = NotFound desc = get container info: container \"2d237e6da408d8899cadabe6fe1b0b1f8dc8575196f1a7f3553f40f5eed9b9b7\" in namespace \"k8s.io\": not found" containerID="2d237e6da408d8899cadabe6fe1b0b1f8dc8575196f1a7f3553f40f5eed9b9b7"
Oct 2 19:17:26.489857 kubelet[2020]: E1002 19:17:26.489834 2020 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = NotFound desc = get container info: container "2d237e6da408d8899cadabe6fe1b0b1f8dc8575196f1a7f3553f40f5eed9b9b7" in namespace "k8s.io": not found; Skipping pod "cilium-swhzw_kube-system(1af4ab9f-e101-4cff-a4cc-854aa6d5192f)"
Oct 2 19:17:26.492765 kubelet[2020]: E1002 19:17:26.492672 2020 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-swhzw_kube-system(1af4ab9f-e101-4cff-a4cc-854aa6d5192f)\"" pod="kube-system/cilium-swhzw" podUID=1af4ab9f-e101-4cff-a4cc-854aa6d5192f
Oct 2 19:17:26.496783 env[1566]: time="2023-10-02T19:17:26.496700274Z" level=info msg="RemoveContainer for \"2d237e6da408d8899cadabe6fe1b0b1f8dc8575196f1a7f3553f40f5eed9b9b7\" returns successfully"
Oct 2 19:17:26.785378 kubelet[2020]: E1002 19:17:26.785249 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:17:27.167142 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3db01dbc63fe6b1bc6a59716b0ebe72770f7908f00ab8d4a9fe1ba905daf1a5-rootfs.mount: Deactivated successfully.
Oct 2 19:17:27.786765 kubelet[2020]: E1002 19:17:27.786716 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:17:28.788135 kubelet[2020]: E1002 19:17:28.788064 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:17:28.868659 kubelet[2020]: E1002 19:17:28.868602 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:17:29.398136 kubelet[2020]: W1002 19:17:29.398087 2020 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1af4ab9f_e101_4cff_a4cc_854aa6d5192f.slice/cri-containerd-e3db01dbc63fe6b1bc6a59716b0ebe72770f7908f00ab8d4a9fe1ba905daf1a5.scope WatchSource:0}: task e3db01dbc63fe6b1bc6a59716b0ebe72770f7908f00ab8d4a9fe1ba905daf1a5 not found: not found
Oct 2 19:17:29.789135
kubelet[2020]: E1002 19:17:29.788974 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:30.789370 kubelet[2020]: E1002 19:17:30.789301 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:31.789826 kubelet[2020]: E1002 19:17:31.789764 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:32.790503 kubelet[2020]: E1002 19:17:32.790431 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:33.670382 kubelet[2020]: E1002 19:17:33.670316 2020 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:33.791221 kubelet[2020]: E1002 19:17:33.791153 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:33.869893 kubelet[2020]: E1002 19:17:33.869851 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:34.791373 kubelet[2020]: E1002 19:17:34.791300 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:35.792143 kubelet[2020]: E1002 19:17:35.792071 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:36.792509 kubelet[2020]: E1002 19:17:36.792463 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:37.793853 kubelet[2020]: E1002 19:17:37.793807 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:17:38.794647 kubelet[2020]: E1002 19:17:38.794570 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:38.871284 kubelet[2020]: E1002 19:17:38.871240 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:39.795754 kubelet[2020]: E1002 19:17:39.795686 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:40.150331 kubelet[2020]: E1002 19:17:40.149871 2020 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-swhzw_kube-system(1af4ab9f-e101-4cff-a4cc-854aa6d5192f)\"" pod="kube-system/cilium-swhzw" podUID=1af4ab9f-e101-4cff-a4cc-854aa6d5192f Oct 2 19:17:40.796266 kubelet[2020]: E1002 19:17:40.796218 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:41.797636 kubelet[2020]: E1002 19:17:41.797571 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:42.798513 kubelet[2020]: E1002 19:17:42.798437 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:43.799438 kubelet[2020]: E1002 19:17:43.799373 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:43.872659 kubelet[2020]: E1002 19:17:43.872624 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" Oct 2 19:17:44.800175 kubelet[2020]: E1002 19:17:44.800103 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:45.800592 kubelet[2020]: E1002 19:17:45.800512 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:46.801279 kubelet[2020]: E1002 19:17:46.801217 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:47.801469 kubelet[2020]: E1002 19:17:47.801422 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:48.803035 kubelet[2020]: E1002 19:17:48.802973 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:48.874187 kubelet[2020]: E1002 19:17:48.874135 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:49.803927 kubelet[2020]: E1002 19:17:49.803832 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:50.805054 kubelet[2020]: E1002 19:17:50.805008 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:51.806295 kubelet[2020]: E1002 19:17:51.806238 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:52.807621 kubelet[2020]: E1002 19:17:52.807343 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:53.149853 kubelet[2020]: E1002 19:17:53.149574 2020 pod_workers.go:965] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-swhzw_kube-system(1af4ab9f-e101-4cff-a4cc-854aa6d5192f)\"" pod="kube-system/cilium-swhzw" podUID=1af4ab9f-e101-4cff-a4cc-854aa6d5192f Oct 2 19:17:53.670036 kubelet[2020]: E1002 19:17:53.669962 2020 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:53.808025 kubelet[2020]: E1002 19:17:53.807980 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:53.875739 kubelet[2020]: E1002 19:17:53.875704 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:17:54.809576 kubelet[2020]: E1002 19:17:54.809531 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:55.811043 kubelet[2020]: E1002 19:17:55.810992 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:56.812142 kubelet[2020]: E1002 19:17:56.812043 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:57.812820 kubelet[2020]: E1002 19:17:57.812769 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:58.814446 kubelet[2020]: E1002 19:17:58.814399 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:17:58.877869 kubelet[2020]: E1002 19:17:58.877828 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: cni plugin not initialized" Oct 2 19:17:59.815976 kubelet[2020]: E1002 19:17:59.815884 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:00.816344 kubelet[2020]: E1002 19:18:00.816272 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:01.816713 kubelet[2020]: E1002 19:18:01.816667 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:02.818044 kubelet[2020]: E1002 19:18:02.817976 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:03.818216 kubelet[2020]: E1002 19:18:03.818128 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:03.878998 kubelet[2020]: E1002 19:18:03.878966 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:04.818580 kubelet[2020]: E1002 19:18:04.818530 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:05.820124 kubelet[2020]: E1002 19:18:05.820057 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:06.150004 kubelet[2020]: E1002 19:18:06.149962 2020 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-swhzw_kube-system(1af4ab9f-e101-4cff-a4cc-854aa6d5192f)\"" pod="kube-system/cilium-swhzw" podUID=1af4ab9f-e101-4cff-a4cc-854aa6d5192f Oct 2 19:18:06.820694 kubelet[2020]: E1002 19:18:06.820646 
2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:07.821746 kubelet[2020]: E1002 19:18:07.821672 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:08.822135 kubelet[2020]: E1002 19:18:08.822059 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:08.880372 kubelet[2020]: E1002 19:18:08.880317 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:09.823273 kubelet[2020]: E1002 19:18:09.823226 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:10.824819 kubelet[2020]: E1002 19:18:10.824773 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:11.825869 kubelet[2020]: E1002 19:18:11.825821 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:12.827228 kubelet[2020]: E1002 19:18:12.827182 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:13.670201 kubelet[2020]: E1002 19:18:13.670130 2020 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:13.828279 kubelet[2020]: E1002 19:18:13.828211 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:13.881252 kubelet[2020]: E1002 19:18:13.881219 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" Oct 2 19:18:14.829254 kubelet[2020]: E1002 19:18:14.829184 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:15.830207 kubelet[2020]: E1002 19:18:15.830145 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:16.830750 kubelet[2020]: E1002 19:18:16.830679 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:17.831478 kubelet[2020]: E1002 19:18:17.831402 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:18.831981 kubelet[2020]: E1002 19:18:18.831886 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:18.883104 kubelet[2020]: E1002 19:18:18.883061 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:19.832856 kubelet[2020]: E1002 19:18:19.832782 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:20.833223 kubelet[2020]: E1002 19:18:20.833153 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:21.149091 kubelet[2020]: E1002 19:18:21.149041 2020 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-swhzw_kube-system(1af4ab9f-e101-4cff-a4cc-854aa6d5192f)\"" pod="kube-system/cilium-swhzw" podUID=1af4ab9f-e101-4cff-a4cc-854aa6d5192f Oct 2 19:18:21.834271 kubelet[2020]: E1002 
19:18:21.834223 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:22.835586 kubelet[2020]: E1002 19:18:22.835510 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:23.836304 kubelet[2020]: E1002 19:18:23.836236 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:23.884285 kubelet[2020]: E1002 19:18:23.884215 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:24.837472 kubelet[2020]: E1002 19:18:24.837423 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:25.838857 kubelet[2020]: E1002 19:18:25.838801 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:26.839021 kubelet[2020]: E1002 19:18:26.838949 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:27.840025 kubelet[2020]: E1002 19:18:27.839973 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:28.841155 kubelet[2020]: E1002 19:18:28.841083 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:28.886250 kubelet[2020]: E1002 19:18:28.886206 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:29.841612 kubelet[2020]: E1002 19:18:29.841539 2020 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:18:30.841767 kubelet[2020]: E1002 19:18:30.841724 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:18:31.842842 kubelet[2020]: E1002 19:18:31.842770 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:18:32.843016 kubelet[2020]: E1002 19:18:32.842945 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:18:33.669364 kubelet[2020]: E1002 19:18:33.669299 2020 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:18:33.843484 kubelet[2020]: E1002 19:18:33.843421 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:18:33.887374 kubelet[2020]: E1002 19:18:33.887342 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:18:34.468971 update_engine[1547]: I1002 19:18:34.468307 1547 prefs.cc:51] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Oct 2 19:18:34.468971 update_engine[1547]: I1002 19:18:34.468364 1547 prefs.cc:51] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Oct 2 19:18:34.468971 update_engine[1547]: I1002 19:18:34.468720 1547 prefs.cc:51] aleph-version not present in /var/lib/update_engine/prefs
Oct 2 19:18:34.469659 update_engine[1547]: I1002 19:18:34.469616 1547 omaha_request_params.cc:62] Current group set to lts
Oct 2 19:18:34.470037 update_engine[1547]: I1002 19:18:34.469817 1547 update_attempter.cc:495] Already updated boot flags. Skipping.
Oct 2 19:18:34.470037 update_engine[1547]: I1002 19:18:34.469841 1547 update_attempter.cc:638] Scheduling an action processor start.
Oct 2 19:18:34.470037 update_engine[1547]: I1002 19:18:34.469870 1547 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Oct 2 19:18:34.470037 update_engine[1547]: I1002 19:18:34.469938 1547 prefs.cc:51] previous-version not present in /var/lib/update_engine/prefs
Oct 2 19:18:34.471058 update_engine[1547]: I1002 19:18:34.471000 1547 omaha_request_action.cc:268] Posting an Omaha request to https://public.update.flatcar-linux.net/v1/update/
Oct 2 19:18:34.471058 update_engine[1547]: I1002 19:18:34.471035 1547 omaha_request_action.cc:269] Request:
Oct 2 19:18:34.471058 update_engine[1547]:
Oct 2 19:18:34.471058 update_engine[1547]:
Oct 2 19:18:34.471058 update_engine[1547]:
Oct 2 19:18:34.471058 update_engine[1547]:
Oct 2 19:18:34.471058 update_engine[1547]:
Oct 2 19:18:34.471058 update_engine[1547]:
Oct 2 19:18:34.471058 update_engine[1547]:
Oct 2 19:18:34.471058 update_engine[1547]:
Oct 2 19:18:34.471058 update_engine[1547]: I1002 19:18:34.471050 1547 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Oct 2 19:18:34.472889 locksmithd[1585]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Oct 2 19:18:34.474969 update_engine[1547]: I1002 19:18:34.474872 1547 libcurl_http_fetcher.cc:174] Setting up curl options for HTTPS
Oct 2 19:18:34.475281 update_engine[1547]: I1002 19:18:34.475234 1547 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Oct 2 19:18:34.844663 kubelet[2020]: E1002 19:18:34.844507 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:18:35.149226 kubelet[2020]: E1002 19:18:35.149148 2020 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-swhzw_kube-system(1af4ab9f-e101-4cff-a4cc-854aa6d5192f)\"" pod="kube-system/cilium-swhzw" podUID=1af4ab9f-e101-4cff-a4cc-854aa6d5192f
Oct 2 19:18:35.624663 update_engine[1547]: I1002 19:18:35.624621 1547 prefs.cc:51] update-server-cert-0-2 not present in /var/lib/update_engine/prefs
Oct 2 19:18:35.625711 update_engine[1547]: I1002 19:18:35.625677 1547 prefs.cc:51] update-server-cert-0-1 not present in /var/lib/update_engine/prefs
Oct 2 19:18:35.626164 update_engine[1547]: I1002 19:18:35.626140 1547 prefs.cc:51] update-server-cert-0-0 not present in /var/lib/update_engine/prefs
Oct 2 19:18:35.845363 kubelet[2020]: E1002 19:18:35.845287 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:18:35.943960 update_engine[1547]: I1002 19:18:35.943565 1547 libcurl_http_fetcher.cc:263] HTTP response code: 200
Oct 2 19:18:35.946384 update_engine[1547]: I1002 19:18:35.946332 1547 libcurl_http_fetcher.cc:320] Transfer completed (200), 314 bytes downloaded
Oct 2 19:18:35.946590 update_engine[1547]: I1002 19:18:35.946568 1547 omaha_request_action.cc:619] Omaha request response:
Oct 2 19:18:35.946590 update_engine[1547]:
Oct 2 19:18:35.954976 update_engine[1547]: I1002 19:18:35.954895 1547 omaha_request_action.cc:409] No update.
Oct 2 19:18:35.955187 update_engine[1547]: I1002 19:18:35.955159 1547 action_processor.cc:82] ActionProcessor::ActionComplete: finished OmahaRequestAction, starting OmahaResponseHandlerAction
Oct 2 19:18:35.955315 update_engine[1547]: I1002 19:18:35.955293 1547 omaha_response_handler_action.cc:36] There are no updates. Aborting.
Oct 2 19:18:35.955435 update_engine[1547]: I1002 19:18:35.955415 1547 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaResponseHandlerAction action failed. Aborting processing.
Oct 2 19:18:35.955543 update_engine[1547]: I1002 19:18:35.955521 1547 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaResponseHandlerAction
Oct 2 19:18:35.955649 update_engine[1547]: I1002 19:18:35.955627 1547 update_attempter.cc:302] Processing Done.
Oct 2 19:18:35.955757 update_engine[1547]: I1002 19:18:35.955735 1547 update_attempter.cc:338] No update.
Oct 2 19:18:35.955872 update_engine[1547]: I1002 19:18:35.955847 1547 update_check_scheduler.cc:74] Next update check in 47m13s
Oct 2 19:18:35.956559 locksmithd[1585]: LastCheckedTime=1696274315 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Oct 2 19:18:36.846298 kubelet[2020]: E1002 19:18:36.846253 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:18:37.847294 kubelet[2020]: E1002 19:18:37.847244 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:18:38.848046 kubelet[2020]: E1002 19:18:38.848003 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:18:38.889219 kubelet[2020]: E1002 19:18:38.889179 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:18:39.849080 kubelet[2020]: E1002 19:18:39.849004 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:18:40.850155 kubelet[2020]: E1002 19:18:40.850089 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:18:41.850802 kubelet[2020]: E1002 19:18:41.850730 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:18:42.851466 kubelet[2020]: E1002 19:18:42.851418 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:18:43.219030 env[1566]: time="2023-10-02T19:18:43.218945279Z" level=info msg="StopPodSandbox for \"e4ac73e9e26cee9ab50a439cc6e66fd2db28c3015bf930e4f29c46b66778110d\""
Oct 2 19:18:43.219881 env[1566]: time="2023-10-02T19:18:43.219781399Z" level=info msg="Container to stop \"e3db01dbc63fe6b1bc6a59716b0ebe72770f7908f00ab8d4a9fe1ba905daf1a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 2 19:18:43.222740 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e4ac73e9e26cee9ab50a439cc6e66fd2db28c3015bf930e4f29c46b66778110d-shm.mount: Deactivated successfully.
Oct 2 19:18:43.243000 audit: BPF prog-id=77 op=UNLOAD
Oct 2 19:18:43.244205 systemd[1]: cri-containerd-e4ac73e9e26cee9ab50a439cc6e66fd2db28c3015bf930e4f29c46b66778110d.scope: Deactivated successfully.
Oct 2 19:18:43.247098 kernel: kauditd_printk_skb: 165 callbacks suppressed
Oct 2 19:18:43.247262 kernel: audit: type=1334 audit(1696274323.243:730): prog-id=77 op=UNLOAD
Oct 2 19:18:43.250000 audit: BPF prog-id=80 op=UNLOAD
Oct 2 19:18:43.255016 kernel: audit: type=1334 audit(1696274323.250:731): prog-id=80 op=UNLOAD
Oct 2 19:18:43.302195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4ac73e9e26cee9ab50a439cc6e66fd2db28c3015bf930e4f29c46b66778110d-rootfs.mount: Deactivated successfully.
Oct 2 19:18:43.323711 env[1566]: time="2023-10-02T19:18:43.323607674Z" level=info msg="shim disconnected" id=e4ac73e9e26cee9ab50a439cc6e66fd2db28c3015bf930e4f29c46b66778110d
Oct 2 19:18:43.323711 env[1566]: time="2023-10-02T19:18:43.323700614Z" level=warning msg="cleaning up after shim disconnected" id=e4ac73e9e26cee9ab50a439cc6e66fd2db28c3015bf930e4f29c46b66778110d namespace=k8s.io
Oct 2 19:18:43.324245 env[1566]: time="2023-10-02T19:18:43.323726090Z" level=info msg="cleaning up dead shim"
Oct 2 19:18:43.352381 env[1566]: time="2023-10-02T19:18:43.352305739Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:18:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2751 runtime=io.containerd.runc.v2\n"
Oct 2 19:18:43.353132 env[1566]: time="2023-10-02T19:18:43.353052616Z" level=info msg="TearDown network for sandbox \"e4ac73e9e26cee9ab50a439cc6e66fd2db28c3015bf930e4f29c46b66778110d\" successfully"
Oct 2 19:18:43.353132 env[1566]: time="2023-10-02T19:18:43.353116923Z" level=info msg="StopPodSandbox for \"e4ac73e9e26cee9ab50a439cc6e66fd2db28c3015bf930e4f29c46b66778110d\" returns successfully"
Oct 2 19:18:43.425223 kubelet[2020]: I1002 19:18:43.425171 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-lib-modules\") pod \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\" (UID: \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\") "
Oct 2 19:18:43.425603 kubelet[2020]: I1002 19:18:43.425567 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-cilium-config-path\") pod \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\" (UID: \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\") "
Oct 2 19:18:43.425834 kubelet[2020]: I1002 19:18:43.425804 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-host-proc-sys-net\") pod \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\" (UID: \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\") "
Oct 2 19:18:43.426113 kubelet[2020]: I1002 19:18:43.426082 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-hubble-tls\") pod \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\" (UID: \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\") "
Oct 2 19:18:43.426341 kubelet[2020]: I1002 19:18:43.426309 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wmlb\" (UniqueName: \"kubernetes.io/projected/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-kube-api-access-6wmlb\") pod \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\" (UID: \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\") "
Oct 2 19:18:43.426578 kubelet[2020]: I1002 19:18:43.426545 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-bpf-maps\") pod \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\" (UID: \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\") "
Oct 2 19:18:43.426800 kubelet[2020]: I1002 19:18:43.426769 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-etc-cni-netd\") pod \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\" (UID: \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\") "
Oct 2 19:18:43.427480 kubelet[2020]: I1002 19:18:43.427410 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-hostproc\") pod \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\" (UID: \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\") "
Oct 2 19:18:43.427480 kubelet[2020]: I1002 19:18:43.427485 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-xtables-lock\") pod \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\" (UID: \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\") "
Oct 2 19:18:43.427716 kubelet[2020]: I1002 19:18:43.427536 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-cilium-cgroup\") pod \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\" (UID: \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\") "
Oct 2 19:18:43.427716 kubelet[2020]: I1002 19:18:43.427588 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-clustermesh-secrets\") pod \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\" (UID: \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\") "
Oct 2 19:18:43.427716 kubelet[2020]: I1002 19:18:43.427634 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-host-proc-sys-kernel\") pod \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\" (UID: \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\") "
Oct 2 19:18:43.427716 kubelet[2020]: I1002 19:18:43.427682 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-cilium-run\") pod \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\" (UID: \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\") "
Oct 2 19:18:43.428081 kubelet[2020]: I1002 19:18:43.427722 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-cni-path\") pod \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\" (UID: \"1af4ab9f-e101-4cff-a4cc-854aa6d5192f\") "
Oct 2 19:18:43.428081 kubelet[2020]: I1002 19:18:43.427795 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-cni-path" (OuterVolumeSpecName: "cni-path") pod "1af4ab9f-e101-4cff-a4cc-854aa6d5192f" (UID: "1af4ab9f-e101-4cff-a4cc-854aa6d5192f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:18:43.428081 kubelet[2020]: I1002 19:18:43.425985 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1af4ab9f-e101-4cff-a4cc-854aa6d5192f" (UID: "1af4ab9f-e101-4cff-a4cc-854aa6d5192f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:18:43.428081 kubelet[2020]: I1002 19:18:43.426942 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1af4ab9f-e101-4cff-a4cc-854aa6d5192f" (UID: "1af4ab9f-e101-4cff-a4cc-854aa6d5192f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:18:43.428081 kubelet[2020]: W1002 19:18:43.426299 2020 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/1af4ab9f-e101-4cff-a4cc-854aa6d5192f/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Oct 2 19:18:43.435051 kubelet[2020]: I1002 19:18:43.434931 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1af4ab9f-e101-4cff-a4cc-854aa6d5192f" (UID: "1af4ab9f-e101-4cff-a4cc-854aa6d5192f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 2 19:18:43.435051 kubelet[2020]: I1002 19:18:43.425358 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1af4ab9f-e101-4cff-a4cc-854aa6d5192f" (UID: "1af4ab9f-e101-4cff-a4cc-854aa6d5192f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:18:43.435051 kubelet[2020]: I1002 19:18:43.427348 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1af4ab9f-e101-4cff-a4cc-854aa6d5192f" (UID: "1af4ab9f-e101-4cff-a4cc-854aa6d5192f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:18:43.435388 kubelet[2020]: I1002 19:18:43.435123 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-hostproc" (OuterVolumeSpecName: "hostproc") pod "1af4ab9f-e101-4cff-a4cc-854aa6d5192f" (UID: "1af4ab9f-e101-4cff-a4cc-854aa6d5192f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:18:43.435388 kubelet[2020]: I1002 19:18:43.435201 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1af4ab9f-e101-4cff-a4cc-854aa6d5192f" (UID: "1af4ab9f-e101-4cff-a4cc-854aa6d5192f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:18:43.435388 kubelet[2020]: I1002 19:18:43.435283 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1af4ab9f-e101-4cff-a4cc-854aa6d5192f" (UID: "1af4ab9f-e101-4cff-a4cc-854aa6d5192f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:18:43.435670 kubelet[2020]: I1002 19:18:43.435637 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1af4ab9f-e101-4cff-a4cc-854aa6d5192f" (UID: "1af4ab9f-e101-4cff-a4cc-854aa6d5192f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:18:43.435857 kubelet[2020]: I1002 19:18:43.435826 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1af4ab9f-e101-4cff-a4cc-854aa6d5192f" (UID: "1af4ab9f-e101-4cff-a4cc-854aa6d5192f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:18:43.443731 systemd[1]: var-lib-kubelet-pods-1af4ab9f\x2de101\x2d4cff\x2da4cc\x2d854aa6d5192f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Oct 2 19:18:43.446687 kubelet[2020]: I1002 19:18:43.446619 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1af4ab9f-e101-4cff-a4cc-854aa6d5192f" (UID: "1af4ab9f-e101-4cff-a4cc-854aa6d5192f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 19:18:43.449508 systemd[1]: var-lib-kubelet-pods-1af4ab9f\x2de101\x2d4cff\x2da4cc\x2d854aa6d5192f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Oct 2 19:18:43.455092 systemd[1]: var-lib-kubelet-pods-1af4ab9f\x2de101\x2d4cff\x2da4cc\x2d854aa6d5192f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6wmlb.mount: Deactivated successfully.
Oct 2 19:18:43.456402 kubelet[2020]: I1002 19:18:43.456352 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1af4ab9f-e101-4cff-a4cc-854aa6d5192f" (UID: "1af4ab9f-e101-4cff-a4cc-854aa6d5192f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 2 19:18:43.457648 kubelet[2020]: I1002 19:18:43.457595 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-kube-api-access-6wmlb" (OuterVolumeSpecName: "kube-api-access-6wmlb") pod "1af4ab9f-e101-4cff-a4cc-854aa6d5192f" (UID: "1af4ab9f-e101-4cff-a4cc-854aa6d5192f"). InnerVolumeSpecName "kube-api-access-6wmlb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 19:18:43.529076 kubelet[2020]: I1002 19:18:43.528934 2020 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-lib-modules\") on node \"172.31.21.101\" DevicePath \"\""
Oct 2 19:18:43.529076 kubelet[2020]: I1002 19:18:43.528989 2020 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-cilium-config-path\") on node \"172.31.21.101\" DevicePath \"\""
Oct 2 19:18:43.529076 kubelet[2020]: I1002 19:18:43.529021 2020 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-host-proc-sys-net\") on node \"172.31.21.101\" DevicePath \"\""
Oct 2 19:18:43.529076 kubelet[2020]: I1002 19:18:43.529047 2020 reconciler.go:399] "Volume detached for volume \"kube-api-access-6wmlb\" (UniqueName: \"kubernetes.io/projected/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-kube-api-access-6wmlb\") on node \"172.31.21.101\" DevicePath \"\""
Oct 2 19:18:43.531396 kubelet[2020]: I1002 19:18:43.531336 2020 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-bpf-maps\") on node \"172.31.21.101\" DevicePath \"\""
Oct 2 19:18:43.531396 kubelet[2020]: I1002 19:18:43.531403 2020 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-etc-cni-netd\") on node \"172.31.21.101\" DevicePath \"\""
Oct 2 19:18:43.531650 kubelet[2020]: I1002 19:18:43.531430 2020 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-hostproc\") on node \"172.31.21.101\" DevicePath \"\""
Oct 2 19:18:43.531650 kubelet[2020]: I1002 19:18:43.531458 2020 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-xtables-lock\") on node \"172.31.21.101\" DevicePath \"\""
Oct 2 19:18:43.531650 kubelet[2020]: I1002 19:18:43.531483 2020 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-cilium-cgroup\") on node \"172.31.21.101\" DevicePath \"\""
Oct 2 19:18:43.531650 kubelet[2020]: I1002 19:18:43.531509 2020 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-clustermesh-secrets\") on node \"172.31.21.101\" DevicePath \"\""
Oct 2 19:18:43.531650 kubelet[2020]: I1002 19:18:43.531534 2020 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-hubble-tls\") on node \"172.31.21.101\" DevicePath \"\""
Oct 2 19:18:43.531650 kubelet[2020]: I1002 19:18:43.531559 2020 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-host-proc-sys-kernel\") on node \"172.31.21.101\" DevicePath \"\""
Oct 2 19:18:43.531650 kubelet[2020]: I1002 19:18:43.531582 2020 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-cni-path\") on node \"172.31.21.101\" DevicePath \"\""
Oct 2 19:18:43.531650 kubelet[2020]: I1002 19:18:43.531607 2020 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1af4ab9f-e101-4cff-a4cc-854aa6d5192f-cilium-run\") on node \"172.31.21.101\" DevicePath \"\""
Oct 2 19:18:43.640517 kubelet[2020]: I1002 19:18:43.640462 2020 scope.go:115] "RemoveContainer" containerID="e3db01dbc63fe6b1bc6a59716b0ebe72770f7908f00ab8d4a9fe1ba905daf1a5"
Oct 2 19:18:43.642980 env[1566]: time="2023-10-02T19:18:43.642574898Z" level=info msg="RemoveContainer for \"e3db01dbc63fe6b1bc6a59716b0ebe72770f7908f00ab8d4a9fe1ba905daf1a5\""
Oct 2 19:18:43.652680 env[1566]: time="2023-10-02T19:18:43.651937437Z" level=info msg="RemoveContainer for \"e3db01dbc63fe6b1bc6a59716b0ebe72770f7908f00ab8d4a9fe1ba905daf1a5\" returns successfully"
Oct 2 19:18:43.652095 systemd[1]: Removed slice kubepods-burstable-pod1af4ab9f_e101_4cff_a4cc_854aa6d5192f.slice.
Oct 2 19:18:43.693455 kubelet[2020]: I1002 19:18:43.693389 2020 topology_manager.go:205] "Topology Admit Handler"
Oct 2 19:18:43.693667 kubelet[2020]: E1002 19:18:43.693473 2020 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="1af4ab9f-e101-4cff-a4cc-854aa6d5192f" containerName="mount-cgroup"
Oct 2 19:18:43.693667 kubelet[2020]: E1002 19:18:43.693496 2020 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="1af4ab9f-e101-4cff-a4cc-854aa6d5192f" containerName="mount-cgroup"
Oct 2 19:18:43.693667 kubelet[2020]: E1002 19:18:43.693514 2020 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="1af4ab9f-e101-4cff-a4cc-854aa6d5192f" containerName="mount-cgroup"
Oct 2 19:18:43.693667 kubelet[2020]: E1002 19:18:43.693531 2020 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="1af4ab9f-e101-4cff-a4cc-854aa6d5192f" containerName="mount-cgroup"
Oct 2 19:18:43.693667 kubelet[2020]: I1002 19:18:43.693567 2020 memory_manager.go:345] "RemoveStaleState removing state" podUID="1af4ab9f-e101-4cff-a4cc-854aa6d5192f" containerName="mount-cgroup"
Oct 2 19:18:43.693667 kubelet[2020]: I1002 19:18:43.693586 2020 memory_manager.go:345] "RemoveStaleState removing state" podUID="1af4ab9f-e101-4cff-a4cc-854aa6d5192f" containerName="mount-cgroup"
Oct 2 19:18:43.693667 kubelet[2020]: I1002 19:18:43.693603 2020 memory_manager.go:345] "RemoveStaleState removing state" podUID="1af4ab9f-e101-4cff-a4cc-854aa6d5192f" containerName="mount-cgroup"
Oct 2 19:18:43.693667 kubelet[2020]: E1002 19:18:43.693635 2020 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="1af4ab9f-e101-4cff-a4cc-854aa6d5192f" containerName="mount-cgroup"
Oct 2 19:18:43.693667 kubelet[2020]: I1002 19:18:43.693668 2020 memory_manager.go:345] "RemoveStaleState removing state" podUID="1af4ab9f-e101-4cff-a4cc-854aa6d5192f" containerName="mount-cgroup"
Oct 2 19:18:43.694302 kubelet[2020]: I1002 19:18:43.693685 2020 memory_manager.go:345] "RemoveStaleState removing state" podUID="1af4ab9f-e101-4cff-a4cc-854aa6d5192f" containerName="mount-cgroup"
Oct 2 19:18:43.706134 systemd[1]: Created slice kubepods-burstable-pod4416e47c_4d4f_4b42_9021_51c0f1f8d781.slice.
Oct 2 19:18:43.833482 kubelet[2020]: I1002 19:18:43.833263 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mlc5\" (UniqueName: \"kubernetes.io/projected/4416e47c-4d4f-4b42-9021-51c0f1f8d781-kube-api-access-8mlc5\") pod \"cilium-l29vs\" (UID: \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\") " pod="kube-system/cilium-l29vs"
Oct 2 19:18:43.833482 kubelet[2020]: I1002 19:18:43.833428 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-cilium-run\") pod \"cilium-l29vs\" (UID: \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\") " pod="kube-system/cilium-l29vs"
Oct 2 19:18:43.834468 kubelet[2020]: I1002 19:18:43.834344 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-cilium-cgroup\") pod \"cilium-l29vs\" (UID: \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\") " pod="kube-system/cilium-l29vs"
Oct 2 19:18:43.835047 kubelet[2020]: I1002 19:18:43.834996 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-etc-cni-netd\") pod \"cilium-l29vs\" (UID: \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\") " pod="kube-system/cilium-l29vs"
Oct 2 19:18:43.835295 kubelet[2020]: I1002 19:18:43.835274 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-xtables-lock\") pod \"cilium-l29vs\" (UID: \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\") " pod="kube-system/cilium-l29vs"
Oct 2 19:18:43.835535 kubelet[2020]: I1002 19:18:43.835493 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-host-proc-sys-net\") pod \"cilium-l29vs\" (UID: \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\") " pod="kube-system/cilium-l29vs"
Oct 2 19:18:43.835745 kubelet[2020]: I1002 19:18:43.835706 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4416e47c-4d4f-4b42-9021-51c0f1f8d781-clustermesh-secrets\") pod \"cilium-l29vs\" (UID: \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\") " pod="kube-system/cilium-l29vs"
Oct 2 19:18:43.835994 kubelet[2020]: I1002 19:18:43.835967 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-host-proc-sys-kernel\") pod \"cilium-l29vs\" (UID: \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\") " pod="kube-system/cilium-l29vs"
Oct 2 19:18:43.836258 kubelet[2020]: I1002 19:18:43.836235 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-bpf-maps\") pod \"cilium-l29vs\" (UID: \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\") " pod="kube-system/cilium-l29vs"
Oct 2 19:18:43.836471 kubelet[2020]: I1002 19:18:43.836431 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-cni-path\") pod \"cilium-l29vs\" (UID: \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\") " pod="kube-system/cilium-l29vs"
Oct 2 19:18:43.836673 kubelet[2020]: I1002 19:18:43.836652 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-lib-modules\") pod \"cilium-l29vs\" (UID: \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\") " pod="kube-system/cilium-l29vs"
Oct 2 19:18:43.836869 kubelet[2020]: I1002 19:18:43.836848 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-hostproc\") pod \"cilium-l29vs\" (UID: \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\") " pod="kube-system/cilium-l29vs"
Oct 2 19:18:43.837055 kubelet[2020]: I1002 19:18:43.837034 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4416e47c-4d4f-4b42-9021-51c0f1f8d781-cilium-config-path\") pod \"cilium-l29vs\" (UID: \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\") " pod="kube-system/cilium-l29vs"
Oct 2 19:18:43.837266 kubelet[2020]: I1002 19:18:43.837228 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4416e47c-4d4f-4b42-9021-51c0f1f8d781-hubble-tls\") pod \"cilium-l29vs\" (UID: \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\") " pod="kube-system/cilium-l29vs"
Oct 2 19:18:43.852412 kubelet[2020]: E1002 19:18:43.852381 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:18:43.890831 kubelet[2020]: E1002 19:18:43.890778 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:18:44.017186 env[1566]: time="2023-10-02T19:18:44.016676250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l29vs,Uid:4416e47c-4d4f-4b42-9021-51c0f1f8d781,Namespace:kube-system,Attempt:0,}"
Oct 2 19:18:44.050086 env[1566]: time="2023-10-02T19:18:44.049890236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 2 19:18:44.050086 env[1566]: time="2023-10-02T19:18:44.050021900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 2 19:18:44.050474 env[1566]: time="2023-10-02T19:18:44.050051888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 2 19:18:44.051044 env[1566]: time="2023-10-02T19:18:44.050931544Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5c3ce108657c06325dd6b53b9f0473c157ce2b03d50cf47f2c7da12e0419dc56 pid=2777 runtime=io.containerd.runc.v2
Oct 2 19:18:44.084155 systemd[1]: Started cri-containerd-5c3ce108657c06325dd6b53b9f0473c157ce2b03d50cf47f2c7da12e0419dc56.scope.
Oct 2 19:18:44.115000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.115000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.131274 kernel: audit: type=1400 audit(1696274324.115:732): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.131407 kernel: audit: type=1400 audit(1696274324.115:733): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.115000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.143169 kernel: audit: type=1400 audit(1696274324.115:734): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.115000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.151655 kernel: audit: type=1400 audit(1696274324.115:735): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.151794 kernel: audit: type=1400 audit(1696274324.115:736): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.115000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.155353 kubelet[2020]: I1002 19:18:44.155318 2020 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=1af4ab9f-e101-4cff-a4cc-854aa6d5192f path="/var/lib/kubelet/pods/1af4ab9f-e101-4cff-a4cc-854aa6d5192f/volumes" Oct 2 19:18:44.115000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.167119 kernel: audit: type=1400 audit(1696274324.115:737): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.115000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.175187 kernel: audit: type=1400 audit(1696274324.115:738): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.115000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.183926 kernel: audit: type=1400 audit(1696274324.115:739): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.115000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.122000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.122000 audit: BPF prog-id=84 op=LOAD Oct 2 19:18:44.130000 audit[2786]: AVC avc: denied { bpf } for pid=2786 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.130000 audit[2786]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=40001bdb38 a2=10 a3=0 items=0 ppid=2777 pid=2786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:44.130000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563336365313038363537633036333235646436623533623966303437 Oct 2 19:18:44.130000 audit[2786]: AVC avc: denied { perfmon } for pid=2786 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.130000 audit[2786]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001bd5a0 a2=3c a3=0 items=0 ppid=2777 pid=2786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:44.130000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563336365313038363537633036333235646436623533623966303437 Oct 2 19:18:44.130000 audit[2786]: AVC avc: denied { bpf } for pid=2786 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.130000 audit[2786]: AVC avc: denied { bpf } for pid=2786 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.130000 audit[2786]: AVC avc: denied { bpf } for pid=2786 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.130000 audit[2786]: AVC avc: denied { perfmon } for pid=2786 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.130000 audit[2786]: AVC avc: denied { perfmon } for pid=2786 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.130000 audit[2786]: AVC avc: denied { perfmon } for pid=2786 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.130000 audit[2786]: AVC avc: denied { perfmon } for pid=2786 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.130000 audit[2786]: AVC avc: denied { perfmon } for pid=2786 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.130000 audit[2786]: AVC avc: 
denied { bpf } for pid=2786 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.130000 audit[2786]: AVC avc: denied { bpf } for pid=2786 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.130000 audit: BPF prog-id=85 op=LOAD Oct 2 19:18:44.130000 audit[2786]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001bd8e0 a2=78 a3=0 items=0 ppid=2777 pid=2786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:44.130000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563336365313038363537633036333235646436623533623966303437 Oct 2 19:18:44.130000 audit[2786]: AVC avc: denied { bpf } for pid=2786 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.130000 audit[2786]: AVC avc: denied { bpf } for pid=2786 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.130000 audit[2786]: AVC avc: denied { perfmon } for pid=2786 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.130000 audit[2786]: AVC avc: denied { perfmon } for pid=2786 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.130000 audit[2786]: AVC avc: denied { perfmon } for pid=2786 comm="runc" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.130000 audit[2786]: AVC avc: denied { perfmon } for pid=2786 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.130000 audit[2786]: AVC avc: denied { perfmon } for pid=2786 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.130000 audit[2786]: AVC avc: denied { bpf } for pid=2786 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.130000 audit[2786]: AVC avc: denied { bpf } for pid=2786 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.130000 audit: BPF prog-id=86 op=LOAD Oct 2 19:18:44.130000 audit[2786]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001bd670 a2=78 a3=0 items=0 ppid=2777 pid=2786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:44.130000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563336365313038363537633036333235646436623533623966303437 Oct 2 19:18:44.131000 audit: BPF prog-id=86 op=UNLOAD Oct 2 19:18:44.131000 audit: BPF prog-id=85 op=UNLOAD Oct 2 19:18:44.131000 audit[2786]: AVC avc: denied { bpf } for pid=2786 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:18:44.131000 audit[2786]: AVC avc: denied { bpf } for pid=2786 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.131000 audit[2786]: AVC avc: denied { bpf } for pid=2786 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.131000 audit[2786]: AVC avc: denied { perfmon } for pid=2786 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.131000 audit[2786]: AVC avc: denied { perfmon } for pid=2786 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.131000 audit[2786]: AVC avc: denied { perfmon } for pid=2786 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.131000 audit[2786]: AVC avc: denied { perfmon } for pid=2786 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.131000 audit[2786]: AVC avc: denied { perfmon } for pid=2786 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.131000 audit[2786]: AVC avc: denied { bpf } for pid=2786 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.131000 audit[2786]: AVC avc: denied { bpf } for pid=2786 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:44.131000 audit: BPF prog-id=87 op=LOAD Oct 2 19:18:44.131000 audit[2786]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=16 a0=5 a1=40001bdb40 a2=78 a3=0 items=0 ppid=2777 pid=2786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:44.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563336365313038363537633036333235646436623533623966303437 Oct 2 19:18:44.206859 env[1566]: time="2023-10-02T19:18:44.206755306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l29vs,Uid:4416e47c-4d4f-4b42-9021-51c0f1f8d781,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c3ce108657c06325dd6b53b9f0473c157ce2b03d50cf47f2c7da12e0419dc56\"" Oct 2 19:18:44.213692 env[1566]: time="2023-10-02T19:18:44.213591305Z" level=info msg="CreateContainer within sandbox \"5c3ce108657c06325dd6b53b9f0473c157ce2b03d50cf47f2c7da12e0419dc56\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:18:44.250047 env[1566]: time="2023-10-02T19:18:44.249955866Z" level=info msg="CreateContainer within sandbox \"5c3ce108657c06325dd6b53b9f0473c157ce2b03d50cf47f2c7da12e0419dc56\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9108233ffd0db0ab25dc4a3f480e914e28f6cb1743370dbfd6bb606e62260994\"" Oct 2 19:18:44.251097 env[1566]: time="2023-10-02T19:18:44.250870946Z" level=info msg="StartContainer for \"9108233ffd0db0ab25dc4a3f480e914e28f6cb1743370dbfd6bb606e62260994\"" Oct 2 19:18:44.305689 systemd[1]: Started cri-containerd-9108233ffd0db0ab25dc4a3f480e914e28f6cb1743370dbfd6bb606e62260994.scope. Oct 2 19:18:44.339644 systemd[1]: cri-containerd-9108233ffd0db0ab25dc4a3f480e914e28f6cb1743370dbfd6bb606e62260994.scope: Deactivated successfully. 
Oct 2 19:18:44.372734 env[1566]: time="2023-10-02T19:18:44.372643462Z" level=info msg="shim disconnected" id=9108233ffd0db0ab25dc4a3f480e914e28f6cb1743370dbfd6bb606e62260994 Oct 2 19:18:44.373081 env[1566]: time="2023-10-02T19:18:44.372733413Z" level=warning msg="cleaning up after shim disconnected" id=9108233ffd0db0ab25dc4a3f480e914e28f6cb1743370dbfd6bb606e62260994 namespace=k8s.io Oct 2 19:18:44.373081 env[1566]: time="2023-10-02T19:18:44.372757425Z" level=info msg="cleaning up dead shim" Oct 2 19:18:44.401923 env[1566]: time="2023-10-02T19:18:44.401805473Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:18:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2837 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:18:44Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/9108233ffd0db0ab25dc4a3f480e914e28f6cb1743370dbfd6bb606e62260994/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:18:44.402474 env[1566]: time="2023-10-02T19:18:44.402346371Z" level=error msg="copy shim log" error="read /proc/self/fd/30: file already closed" Oct 2 19:18:44.402993 env[1566]: time="2023-10-02T19:18:44.402881892Z" level=error msg="Failed to pipe stdout of container \"9108233ffd0db0ab25dc4a3f480e914e28f6cb1743370dbfd6bb606e62260994\"" error="reading from a closed fifo" Oct 2 19:18:44.403181 env[1566]: time="2023-10-02T19:18:44.403058376Z" level=error msg="Failed to pipe stderr of container \"9108233ffd0db0ab25dc4a3f480e914e28f6cb1743370dbfd6bb606e62260994\"" error="reading from a closed fifo" Oct 2 19:18:44.408927 env[1566]: time="2023-10-02T19:18:44.408812099Z" level=error msg="StartContainer for \"9108233ffd0db0ab25dc4a3f480e914e28f6cb1743370dbfd6bb606e62260994\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:18:44.409446 kubelet[2020]: E1002 19:18:44.409364 2020 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="9108233ffd0db0ab25dc4a3f480e914e28f6cb1743370dbfd6bb606e62260994" Oct 2 19:18:44.410893 kubelet[2020]: E1002 19:18:44.410033 2020 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:18:44.410893 kubelet[2020]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:18:44.410893 kubelet[2020]: rm /hostbin/cilium-mount Oct 2 19:18:44.410893 kubelet[2020]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8mlc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-l29vs_kube-system(4416e47c-4d4f-4b42-9021-51c0f1f8d781): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:18:44.412271 kubelet[2020]: E1002 19:18:44.410595 2020 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-l29vs" podUID=4416e47c-4d4f-4b42-9021-51c0f1f8d781 Oct 2 19:18:44.648317 env[1566]: time="2023-10-02T19:18:44.648168272Z" level=info msg="StopPodSandbox for \"5c3ce108657c06325dd6b53b9f0473c157ce2b03d50cf47f2c7da12e0419dc56\"" Oct 2 19:18:44.649248 env[1566]: time="2023-10-02T19:18:44.649009121Z" level=info msg="Container to stop \"9108233ffd0db0ab25dc4a3f480e914e28f6cb1743370dbfd6bb606e62260994\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:18:44.666882 systemd[1]: cri-containerd-5c3ce108657c06325dd6b53b9f0473c157ce2b03d50cf47f2c7da12e0419dc56.scope: Deactivated successfully. 
Oct 2 19:18:44.666000 audit: BPF prog-id=84 op=UNLOAD Oct 2 19:18:44.669000 audit: BPF prog-id=87 op=UNLOAD Oct 2 19:18:44.729293 env[1566]: time="2023-10-02T19:18:44.729212038Z" level=info msg="shim disconnected" id=5c3ce108657c06325dd6b53b9f0473c157ce2b03d50cf47f2c7da12e0419dc56 Oct 2 19:18:44.729293 env[1566]: time="2023-10-02T19:18:44.729288286Z" level=warning msg="cleaning up after shim disconnected" id=5c3ce108657c06325dd6b53b9f0473c157ce2b03d50cf47f2c7da12e0419dc56 namespace=k8s.io Oct 2 19:18:44.729628 env[1566]: time="2023-10-02T19:18:44.729312094Z" level=info msg="cleaning up dead shim" Oct 2 19:18:44.757348 env[1566]: time="2023-10-02T19:18:44.757273090Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:18:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2870 runtime=io.containerd.runc.v2\n" Oct 2 19:18:44.757927 env[1566]: time="2023-10-02T19:18:44.757832444Z" level=info msg="TearDown network for sandbox \"5c3ce108657c06325dd6b53b9f0473c157ce2b03d50cf47f2c7da12e0419dc56\" successfully" Oct 2 19:18:44.757927 env[1566]: time="2023-10-02T19:18:44.757894664Z" level=info msg="StopPodSandbox for \"5c3ce108657c06325dd6b53b9f0473c157ce2b03d50cf47f2c7da12e0419dc56\" returns successfully" Oct 2 19:18:44.843970 kubelet[2020]: I1002 19:18:44.843198 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4416e47c-4d4f-4b42-9021-51c0f1f8d781" (UID: "4416e47c-4d4f-4b42-9021-51c0f1f8d781"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:44.843970 kubelet[2020]: I1002 19:18:44.843286 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-host-proc-sys-kernel\") pod \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\" (UID: \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\") " Oct 2 19:18:44.843970 kubelet[2020]: W1002 19:18:44.843669 2020 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/4416e47c-4d4f-4b42-9021-51c0f1f8d781/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:18:44.845604 kubelet[2020]: I1002 19:18:44.844340 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4416e47c-4d4f-4b42-9021-51c0f1f8d781-cilium-config-path\") pod \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\" (UID: \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\") " Oct 2 19:18:44.845604 kubelet[2020]: I1002 19:18:44.844408 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-cni-path\") pod \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\" (UID: \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\") " Oct 2 19:18:44.845604 kubelet[2020]: I1002 19:18:44.844447 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-hostproc\") pod \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\" (UID: \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\") " Oct 2 19:18:44.845604 kubelet[2020]: I1002 19:18:44.844488 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-cilium-cgroup\") pod \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\" (UID: 
\"4416e47c-4d4f-4b42-9021-51c0f1f8d781\") " Oct 2 19:18:44.845604 kubelet[2020]: I1002 19:18:44.844529 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-etc-cni-netd\") pod \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\" (UID: \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\") " Oct 2 19:18:44.845604 kubelet[2020]: I1002 19:18:44.844571 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-host-proc-sys-net\") pod \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\" (UID: \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\") " Oct 2 19:18:44.846080 kubelet[2020]: I1002 19:18:44.844616 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4416e47c-4d4f-4b42-9021-51c0f1f8d781-clustermesh-secrets\") pod \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\" (UID: \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\") " Oct 2 19:18:44.846080 kubelet[2020]: I1002 19:18:44.844658 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4416e47c-4d4f-4b42-9021-51c0f1f8d781-hubble-tls\") pod \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\" (UID: \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\") " Oct 2 19:18:44.846080 kubelet[2020]: I1002 19:18:44.844700 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mlc5\" (UniqueName: \"kubernetes.io/projected/4416e47c-4d4f-4b42-9021-51c0f1f8d781-kube-api-access-8mlc5\") pod \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\" (UID: \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\") " Oct 2 19:18:44.846080 kubelet[2020]: I1002 19:18:44.844760 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-cilium-run\") pod \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\" (UID: \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\") " Oct 2 19:18:44.846080 kubelet[2020]: I1002 19:18:44.844803 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-bpf-maps\") pod \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\" (UID: \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\") " Oct 2 19:18:44.846080 kubelet[2020]: I1002 19:18:44.844842 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-xtables-lock\") pod \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\" (UID: \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\") " Oct 2 19:18:44.846437 kubelet[2020]: I1002 19:18:44.844881 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-lib-modules\") pod \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\" (UID: \"4416e47c-4d4f-4b42-9021-51c0f1f8d781\") " Oct 2 19:18:44.846437 kubelet[2020]: I1002 19:18:44.844945 2020 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-host-proc-sys-kernel\") on node \"172.31.21.101\" DevicePath \"\"" Oct 2 19:18:44.846437 kubelet[2020]: I1002 19:18:44.844989 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4416e47c-4d4f-4b42-9021-51c0f1f8d781" (UID: "4416e47c-4d4f-4b42-9021-51c0f1f8d781"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:44.846437 kubelet[2020]: I1002 19:18:44.845035 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-cni-path" (OuterVolumeSpecName: "cni-path") pod "4416e47c-4d4f-4b42-9021-51c0f1f8d781" (UID: "4416e47c-4d4f-4b42-9021-51c0f1f8d781"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:44.846437 kubelet[2020]: I1002 19:18:44.845074 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-hostproc" (OuterVolumeSpecName: "hostproc") pod "4416e47c-4d4f-4b42-9021-51c0f1f8d781" (UID: "4416e47c-4d4f-4b42-9021-51c0f1f8d781"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:44.846743 kubelet[2020]: I1002 19:18:44.845112 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4416e47c-4d4f-4b42-9021-51c0f1f8d781" (UID: "4416e47c-4d4f-4b42-9021-51c0f1f8d781"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:44.846743 kubelet[2020]: I1002 19:18:44.845152 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4416e47c-4d4f-4b42-9021-51c0f1f8d781" (UID: "4416e47c-4d4f-4b42-9021-51c0f1f8d781"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:44.846743 kubelet[2020]: I1002 19:18:44.845191 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4416e47c-4d4f-4b42-9021-51c0f1f8d781" (UID: "4416e47c-4d4f-4b42-9021-51c0f1f8d781"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:44.848550 kubelet[2020]: I1002 19:18:44.848483 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4416e47c-4d4f-4b42-9021-51c0f1f8d781-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4416e47c-4d4f-4b42-9021-51c0f1f8d781" (UID: "4416e47c-4d4f-4b42-9021-51c0f1f8d781"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:18:44.848883 kubelet[2020]: I1002 19:18:44.848848 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4416e47c-4d4f-4b42-9021-51c0f1f8d781" (UID: "4416e47c-4d4f-4b42-9021-51c0f1f8d781"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:44.849697 kubelet[2020]: I1002 19:18:44.849397 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4416e47c-4d4f-4b42-9021-51c0f1f8d781" (UID: "4416e47c-4d4f-4b42-9021-51c0f1f8d781"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:44.849883 kubelet[2020]: I1002 19:18:44.849431 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4416e47c-4d4f-4b42-9021-51c0f1f8d781" (UID: "4416e47c-4d4f-4b42-9021-51c0f1f8d781"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:18:44.853704 kubelet[2020]: E1002 19:18:44.853661 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:44.858102 kubelet[2020]: I1002 19:18:44.858030 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4416e47c-4d4f-4b42-9021-51c0f1f8d781-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4416e47c-4d4f-4b42-9021-51c0f1f8d781" (UID: "4416e47c-4d4f-4b42-9021-51c0f1f8d781"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:18:44.863098 kubelet[2020]: I1002 19:18:44.863047 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4416e47c-4d4f-4b42-9021-51c0f1f8d781-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4416e47c-4d4f-4b42-9021-51c0f1f8d781" (UID: "4416e47c-4d4f-4b42-9021-51c0f1f8d781"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:18:44.864757 kubelet[2020]: I1002 19:18:44.864693 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4416e47c-4d4f-4b42-9021-51c0f1f8d781-kube-api-access-8mlc5" (OuterVolumeSpecName: "kube-api-access-8mlc5") pod "4416e47c-4d4f-4b42-9021-51c0f1f8d781" (UID: "4416e47c-4d4f-4b42-9021-51c0f1f8d781"). InnerVolumeSpecName "kube-api-access-8mlc5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:18:44.946959 kubelet[2020]: I1002 19:18:44.946119 2020 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-cni-path\") on node \"172.31.21.101\" DevicePath \"\"" Oct 2 19:18:44.946959 kubelet[2020]: I1002 19:18:44.946180 2020 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-hostproc\") on node \"172.31.21.101\" DevicePath \"\"" Oct 2 19:18:44.946959 kubelet[2020]: I1002 19:18:44.946213 2020 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-cilium-cgroup\") on node \"172.31.21.101\" DevicePath \"\"" Oct 2 19:18:44.946959 kubelet[2020]: I1002 19:18:44.946239 2020 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-etc-cni-netd\") on node \"172.31.21.101\" DevicePath \"\"" Oct 2 19:18:44.946959 kubelet[2020]: I1002 19:18:44.946264 2020 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-host-proc-sys-net\") on node \"172.31.21.101\" DevicePath \"\"" Oct 2 19:18:44.946959 kubelet[2020]: I1002 19:18:44.946288 2020 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4416e47c-4d4f-4b42-9021-51c0f1f8d781-clustermesh-secrets\") on node \"172.31.21.101\" DevicePath \"\"" Oct 2 19:18:44.946959 kubelet[2020]: I1002 19:18:44.946316 2020 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4416e47c-4d4f-4b42-9021-51c0f1f8d781-hubble-tls\") on node \"172.31.21.101\" DevicePath \"\"" Oct 2 19:18:44.946959 kubelet[2020]: I1002 19:18:44.946344 2020 reconciler.go:399] "Volume detached 
for volume \"kube-api-access-8mlc5\" (UniqueName: \"kubernetes.io/projected/4416e47c-4d4f-4b42-9021-51c0f1f8d781-kube-api-access-8mlc5\") on node \"172.31.21.101\" DevicePath \"\"" Oct 2 19:18:44.948731 kubelet[2020]: I1002 19:18:44.946367 2020 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-cilium-run\") on node \"172.31.21.101\" DevicePath \"\"" Oct 2 19:18:44.948731 kubelet[2020]: I1002 19:18:44.946389 2020 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-bpf-maps\") on node \"172.31.21.101\" DevicePath \"\"" Oct 2 19:18:44.948731 kubelet[2020]: I1002 19:18:44.946411 2020 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-xtables-lock\") on node \"172.31.21.101\" DevicePath \"\"" Oct 2 19:18:44.948731 kubelet[2020]: I1002 19:18:44.946435 2020 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4416e47c-4d4f-4b42-9021-51c0f1f8d781-lib-modules\") on node \"172.31.21.101\" DevicePath \"\"" Oct 2 19:18:44.948731 kubelet[2020]: I1002 19:18:44.946460 2020 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4416e47c-4d4f-4b42-9021-51c0f1f8d781-cilium-config-path\") on node \"172.31.21.101\" DevicePath \"\"" Oct 2 19:18:45.222623 systemd[1]: run-containerd-runc-k8s.io-9108233ffd0db0ab25dc4a3f480e914e28f6cb1743370dbfd6bb606e62260994-runc.yBWZPL.mount: Deactivated successfully. Oct 2 19:18:45.222795 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9108233ffd0db0ab25dc4a3f480e914e28f6cb1743370dbfd6bb606e62260994-rootfs.mount: Deactivated successfully. 
Oct 2 19:18:45.222945 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c3ce108657c06325dd6b53b9f0473c157ce2b03d50cf47f2c7da12e0419dc56-rootfs.mount: Deactivated successfully. Oct 2 19:18:45.223080 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5c3ce108657c06325dd6b53b9f0473c157ce2b03d50cf47f2c7da12e0419dc56-shm.mount: Deactivated successfully. Oct 2 19:18:45.223212 systemd[1]: var-lib-kubelet-pods-4416e47c\x2d4d4f\x2d4b42\x2d9021\x2d51c0f1f8d781-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8mlc5.mount: Deactivated successfully. Oct 2 19:18:45.223357 systemd[1]: var-lib-kubelet-pods-4416e47c\x2d4d4f\x2d4b42\x2d9021\x2d51c0f1f8d781-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:18:45.223484 systemd[1]: var-lib-kubelet-pods-4416e47c\x2d4d4f\x2d4b42\x2d9021\x2d51c0f1f8d781-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:18:45.650893 kubelet[2020]: I1002 19:18:45.650861 2020 scope.go:115] "RemoveContainer" containerID="9108233ffd0db0ab25dc4a3f480e914e28f6cb1743370dbfd6bb606e62260994" Oct 2 19:18:45.653504 env[1566]: time="2023-10-02T19:18:45.653443320Z" level=info msg="RemoveContainer for \"9108233ffd0db0ab25dc4a3f480e914e28f6cb1743370dbfd6bb606e62260994\"" Oct 2 19:18:45.660035 systemd[1]: Removed slice kubepods-burstable-pod4416e47c_4d4f_4b42_9021_51c0f1f8d781.slice. 
Oct 2 19:18:45.662199 env[1566]: time="2023-10-02T19:18:45.662136504Z" level=info msg="RemoveContainer for \"9108233ffd0db0ab25dc4a3f480e914e28f6cb1743370dbfd6bb606e62260994\" returns successfully" Oct 2 19:18:45.855281 kubelet[2020]: E1002 19:18:45.855203 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:46.151975 kubelet[2020]: I1002 19:18:46.151891 2020 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=4416e47c-4d4f-4b42-9021-51c0f1f8d781 path="/var/lib/kubelet/pods/4416e47c-4d4f-4b42-9021-51c0f1f8d781/volumes" Oct 2 19:18:46.855693 kubelet[2020]: E1002 19:18:46.855644 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:47.480006 kubelet[2020]: W1002 19:18:47.479955 2020 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4416e47c_4d4f_4b42_9021_51c0f1f8d781.slice/cri-containerd-9108233ffd0db0ab25dc4a3f480e914e28f6cb1743370dbfd6bb606e62260994.scope WatchSource:0}: container "9108233ffd0db0ab25dc4a3f480e914e28f6cb1743370dbfd6bb606e62260994" in namespace "k8s.io": not found Oct 2 19:18:47.856931 kubelet[2020]: E1002 19:18:47.856778 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:48.700573 kubelet[2020]: I1002 19:18:48.700481 2020 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:18:48.700824 kubelet[2020]: E1002 19:18:48.700590 2020 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="4416e47c-4d4f-4b42-9021-51c0f1f8d781" containerName="mount-cgroup" Oct 2 19:18:48.700824 kubelet[2020]: I1002 19:18:48.700632 2020 memory_manager.go:345] "RemoveStaleState removing state" podUID="4416e47c-4d4f-4b42-9021-51c0f1f8d781" containerName="mount-cgroup" Oct 2 19:18:48.711738 systemd[1]: Created slice 
kubepods-besteffort-pod9c85b917_4d6b_43e2_a667_246e729fc023.slice. Oct 2 19:18:48.738063 kubelet[2020]: I1002 19:18:48.738000 2020 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:18:48.750532 systemd[1]: Created slice kubepods-burstable-pod8dffdf61_0425_4cf2_8b39_ee68b2581a4a.slice. Oct 2 19:18:48.766207 kubelet[2020]: I1002 19:18:48.766153 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpjlq\" (UniqueName: \"kubernetes.io/projected/9c85b917-4d6b-43e2-a667-246e729fc023-kube-api-access-fpjlq\") pod \"cilium-operator-69b677f97c-phh5w\" (UID: \"9c85b917-4d6b-43e2-a667-246e729fc023\") " pod="kube-system/cilium-operator-69b677f97c-phh5w" Oct 2 19:18:48.766431 kubelet[2020]: I1002 19:18:48.766239 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9c85b917-4d6b-43e2-a667-246e729fc023-cilium-config-path\") pod \"cilium-operator-69b677f97c-phh5w\" (UID: \"9c85b917-4d6b-43e2-a667-246e729fc023\") " pod="kube-system/cilium-operator-69b677f97c-phh5w" Oct 2 19:18:48.858054 kubelet[2020]: E1002 19:18:48.857993 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:48.867247 kubelet[2020]: I1002 19:18:48.867198 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-cilium-run\") pod \"cilium-t7627\" (UID: \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\") " pod="kube-system/cilium-t7627" Oct 2 19:18:48.867414 kubelet[2020]: I1002 19:18:48.867268 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-etc-cni-netd\") pod \"cilium-t7627\" (UID: \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\") 
" pod="kube-system/cilium-t7627" Oct 2 19:18:48.867414 kubelet[2020]: I1002 19:18:48.867330 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-lib-modules\") pod \"cilium-t7627\" (UID: \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\") " pod="kube-system/cilium-t7627" Oct 2 19:18:48.867414 kubelet[2020]: I1002 19:18:48.867374 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-xtables-lock\") pod \"cilium-t7627\" (UID: \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\") " pod="kube-system/cilium-t7627" Oct 2 19:18:48.867605 kubelet[2020]: I1002 19:18:48.867418 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-host-proc-sys-net\") pod \"cilium-t7627\" (UID: \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\") " pod="kube-system/cilium-t7627" Oct 2 19:18:48.867605 kubelet[2020]: I1002 19:18:48.867460 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-hubble-tls\") pod \"cilium-t7627\" (UID: \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\") " pod="kube-system/cilium-t7627" Oct 2 19:18:48.867605 kubelet[2020]: I1002 19:18:48.867503 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmbww\" (UniqueName: \"kubernetes.io/projected/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-kube-api-access-fmbww\") pod \"cilium-t7627\" (UID: \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\") " pod="kube-system/cilium-t7627" Oct 2 19:18:48.867605 kubelet[2020]: I1002 19:18:48.867546 2020 reconciler.go:357] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-bpf-maps\") pod \"cilium-t7627\" (UID: \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\") " pod="kube-system/cilium-t7627" Oct 2 19:18:48.867605 kubelet[2020]: I1002 19:18:48.867587 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-cni-path\") pod \"cilium-t7627\" (UID: \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\") " pod="kube-system/cilium-t7627" Oct 2 19:18:48.867889 kubelet[2020]: I1002 19:18:48.867641 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-clustermesh-secrets\") pod \"cilium-t7627\" (UID: \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\") " pod="kube-system/cilium-t7627" Oct 2 19:18:48.867889 kubelet[2020]: I1002 19:18:48.867689 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-host-proc-sys-kernel\") pod \"cilium-t7627\" (UID: \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\") " pod="kube-system/cilium-t7627" Oct 2 19:18:48.867889 kubelet[2020]: I1002 19:18:48.867742 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-hostproc\") pod \"cilium-t7627\" (UID: \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\") " pod="kube-system/cilium-t7627" Oct 2 19:18:48.867889 kubelet[2020]: I1002 19:18:48.867789 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-cilium-cgroup\") 
pod \"cilium-t7627\" (UID: \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\") " pod="kube-system/cilium-t7627" Oct 2 19:18:48.867889 kubelet[2020]: I1002 19:18:48.867831 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-cilium-config-path\") pod \"cilium-t7627\" (UID: \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\") " pod="kube-system/cilium-t7627" Oct 2 19:18:48.867889 kubelet[2020]: I1002 19:18:48.867873 2020 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-cilium-ipsec-secrets\") pod \"cilium-t7627\" (UID: \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\") " pod="kube-system/cilium-t7627" Oct 2 19:18:48.896992 kubelet[2020]: E1002 19:18:48.896947 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:49.023266 env[1566]: time="2023-10-02T19:18:49.021278374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-phh5w,Uid:9c85b917-4d6b-43e2-a667-246e729fc023,Namespace:kube-system,Attempt:0,}" Oct 2 19:18:49.058344 env[1566]: time="2023-10-02T19:18:49.058194841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:18:49.058496 env[1566]: time="2023-10-02T19:18:49.058366837Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:18:49.058496 env[1566]: time="2023-10-02T19:18:49.058453428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:18:49.059355 env[1566]: time="2023-10-02T19:18:49.059059822Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/189506401bae42c983fcb64c23906eab76fb55a5c963e190088c57f01741ea2c pid=2898 runtime=io.containerd.runc.v2 Oct 2 19:18:49.065789 env[1566]: time="2023-10-02T19:18:49.065695475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t7627,Uid:8dffdf61-0425-4cf2-8b39-ee68b2581a4a,Namespace:kube-system,Attempt:0,}" Oct 2 19:18:49.100872 systemd[1]: Started cri-containerd-189506401bae42c983fcb64c23906eab76fb55a5c963e190088c57f01741ea2c.scope. Oct 2 19:18:49.120271 env[1566]: time="2023-10-02T19:18:49.119619340Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:18:49.120271 env[1566]: time="2023-10-02T19:18:49.119766711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:18:49.120271 env[1566]: time="2023-10-02T19:18:49.119825979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:18:49.126066 env[1566]: time="2023-10-02T19:18:49.121112158Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1968a7cc7e63e662fa3fb4ca193b0e32ef798cda1da7664530e0ed9418ecfd87 pid=2924 runtime=io.containerd.runc.v2 Oct 2 19:18:49.156197 kernel: kauditd_printk_skb: 51 callbacks suppressed Oct 2 19:18:49.156388 kernel: audit: type=1400 audit(1696274329.147:752): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.147000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.166767 kernel: audit: type=1400 audit(1696274329.147:753): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.147000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.147000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.175564 kernel: audit: type=1400 audit(1696274329.147:754): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.147000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.184660 
kernel: audit: type=1400 audit(1696274329.147:755): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.147000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.194847 kernel: audit: type=1400 audit(1696274329.147:756): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.205184 systemd[1]: Started cri-containerd-1968a7cc7e63e662fa3fb4ca193b0e32ef798cda1da7664530e0ed9418ecfd87.scope. Oct 2 19:18:49.213967 kernel: audit: type=1400 audit(1696274329.147:757): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.147000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.147000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.223862 kernel: audit: type=1400 audit(1696274329.147:758): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.147000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.234164 kernel: audit: type=1400 
audit(1696274329.147:759): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.147000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.242296 kernel: audit: type=1400 audit(1696274329.147:760): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.150000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.251459 kernel: audit: type=1400 audit(1696274329.150:761): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.150000 audit: BPF prog-id=88 op=LOAD Oct 2 19:18:49.158000 audit[2908]: AVC avc: denied { bpf } for pid=2908 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.158000 audit[2908]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000195b38 a2=10 a3=0 items=0 ppid=2898 pid=2908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:49.158000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138393530363430316261653432633938336663623634633233393036 Oct 2 19:18:49.158000 audit[2908]: AVC avc: denied { perfmon } for pid=2908 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.158000 audit[2908]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=2898 pid=2908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:49.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138393530363430316261653432633938336663623634633233393036 Oct 2 19:18:49.158000 audit[2908]: AVC avc: denied { bpf } for pid=2908 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.158000 audit[2908]: AVC avc: denied { bpf } for pid=2908 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.158000 audit[2908]: AVC avc: denied { bpf } for pid=2908 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.158000 audit[2908]: AVC avc: denied { perfmon } for pid=2908 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.158000 audit[2908]: 
AVC avc: denied { perfmon } for pid=2908 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.158000 audit[2908]: AVC avc: denied { perfmon } for pid=2908 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.158000 audit[2908]: AVC avc: denied { perfmon } for pid=2908 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.158000 audit[2908]: AVC avc: denied { perfmon } for pid=2908 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.158000 audit[2908]: AVC avc: denied { bpf } for pid=2908 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.158000 audit[2908]: AVC avc: denied { bpf } for pid=2908 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.158000 audit: BPF prog-id=89 op=LOAD Oct 2 19:18:49.158000 audit[2908]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=2898 pid=2908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:49.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138393530363430316261653432633938336663623634633233393036 Oct 2 19:18:49.168000 audit[2908]: AVC avc: denied { bpf } for 
pid=2908 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.168000 audit[2908]: AVC avc: denied { bpf } for pid=2908 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.168000 audit[2908]: AVC avc: denied { perfmon } for pid=2908 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.168000 audit[2908]: AVC avc: denied { perfmon } for pid=2908 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.168000 audit[2908]: AVC avc: denied { perfmon } for pid=2908 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.168000 audit[2908]: AVC avc: denied { perfmon } for pid=2908 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.168000 audit[2908]: AVC avc: denied { perfmon } for pid=2908 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.168000 audit[2908]: AVC avc: denied { bpf } for pid=2908 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.168000 audit[2908]: AVC avc: denied { bpf } for pid=2908 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.168000 audit: BPF prog-id=90 op=LOAD Oct 2 19:18:49.168000 audit[2908]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000195670 a2=78 a3=0 
items=0 ppid=2898 pid=2908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:49.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138393530363430316261653432633938336663623634633233393036 Oct 2 19:18:49.174000 audit: BPF prog-id=90 op=UNLOAD Oct 2 19:18:49.175000 audit: BPF prog-id=89 op=UNLOAD Oct 2 19:18:49.175000 audit[2908]: AVC avc: denied { bpf } for pid=2908 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.175000 audit[2908]: AVC avc: denied { bpf } for pid=2908 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.175000 audit[2908]: AVC avc: denied { bpf } for pid=2908 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.175000 audit[2908]: AVC avc: denied { perfmon } for pid=2908 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.175000 audit[2908]: AVC avc: denied { perfmon } for pid=2908 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.175000 audit[2908]: AVC avc: denied { perfmon } for pid=2908 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.175000 audit[2908]: AVC avc: denied { perfmon } for pid=2908 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.175000 audit[2908]: AVC avc: denied { perfmon } for pid=2908 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.175000 audit[2908]: AVC avc: denied { bpf } for pid=2908 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.175000 audit[2908]: AVC avc: denied { bpf } for pid=2908 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.175000 audit: BPF prog-id=91 op=LOAD Oct 2 19:18:49.175000 audit[2908]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=2898 pid=2908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:49.175000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138393530363430316261653432633938336663623634633233393036 Oct 2 19:18:49.252000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.252000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.252000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.252000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.252000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.252000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.252000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.252000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.252000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.253000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.253000 audit: BPF prog-id=92 op=LOAD Oct 2 19:18:49.254000 audit[2933]: AVC avc: denied { bpf } for pid=2933 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:49.254000 audit[2933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000195b38 a2=10 a3=0 items=0 ppid=2924 pid=2933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:49.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139363861376363376536336536363266613366623463613139336230 Oct 2 19:18:49.302973 env[1566]: time="2023-10-02T19:18:49.298635941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t7627,Uid:8dffdf61-0425-4cf2-8b39-ee68b2581a4a,Namespace:kube-system,Attempt:0,} returns sandbox id \"1968a7cc7e63e662fa3fb4ca193b0e32ef798cda1da7664530e0ed9418ecfd87\"" Oct 2 19:18:49.307828 env[1566]: time="2023-10-02T19:18:49.307705822Z" level=info msg="CreateContainer within sandbox \"1968a7cc7e63e662fa3fb4ca193b0e32ef798cda1da7664530e0ed9418ecfd87\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:18:49.324185 env[1566]: time="2023-10-02T19:18:49.324069985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-phh5w,Uid:9c85b917-4d6b-43e2-a667-246e729fc023,Namespace:kube-system,Attempt:0,} returns sandbox id \"189506401bae42c983fcb64c23906eab76fb55a5c963e190088c57f01741ea2c\"" Oct 2 19:18:49.327617 env[1566]: time="2023-10-02T19:18:49.327497869Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\"" Oct 2 19:18:49.340573 env[1566]: time="2023-10-02T19:18:49.340506975Z" level=info msg="CreateContainer within sandbox \"1968a7cc7e63e662fa3fb4ca193b0e32ef798cda1da7664530e0ed9418ecfd87\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bc6fd02d7d979bfba924818b95a2410c65e55e3695acad2f24cc2bd485a5a5c9\"" Oct 2 19:18:49.341937 env[1566]: time="2023-10-02T19:18:49.341839151Z" level=info msg="StartContainer for 
\"bc6fd02d7d979bfba924818b95a2410c65e55e3695acad2f24cc2bd485a5a5c9\"" Oct 2 19:18:49.388254 systemd[1]: Started cri-containerd-bc6fd02d7d979bfba924818b95a2410c65e55e3695acad2f24cc2bd485a5a5c9.scope. Oct 2 19:18:49.427838 systemd[1]: cri-containerd-bc6fd02d7d979bfba924818b95a2410c65e55e3695acad2f24cc2bd485a5a5c9.scope: Deactivated successfully. Oct 2 19:18:49.460806 env[1566]: time="2023-10-02T19:18:49.460726817Z" level=info msg="shim disconnected" id=bc6fd02d7d979bfba924818b95a2410c65e55e3695acad2f24cc2bd485a5a5c9 Oct 2 19:18:49.461223 env[1566]: time="2023-10-02T19:18:49.460810481Z" level=warning msg="cleaning up after shim disconnected" id=bc6fd02d7d979bfba924818b95a2410c65e55e3695acad2f24cc2bd485a5a5c9 namespace=k8s.io Oct 2 19:18:49.461223 env[1566]: time="2023-10-02T19:18:49.460834181Z" level=info msg="cleaning up dead shim" Oct 2 19:18:49.490396 env[1566]: time="2023-10-02T19:18:49.490315690Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:18:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2994 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:18:49Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/bc6fd02d7d979bfba924818b95a2410c65e55e3695acad2f24cc2bd485a5a5c9/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:18:49.490993 env[1566]: time="2023-10-02T19:18:49.490846989Z" level=error msg="copy shim log" error="read /proc/self/fd/36: file already closed" Oct 2 19:18:49.492113 env[1566]: time="2023-10-02T19:18:49.492033448Z" level=error msg="Failed to pipe stdout of container \"bc6fd02d7d979bfba924818b95a2410c65e55e3695acad2f24cc2bd485a5a5c9\"" error="reading from a closed fifo" Oct 2 19:18:49.495162 env[1566]: time="2023-10-02T19:18:49.495059442Z" level=error msg="Failed to pipe stderr of container \"bc6fd02d7d979bfba924818b95a2410c65e55e3695acad2f24cc2bd485a5a5c9\"" error="reading from a closed fifo" Oct 2 19:18:49.498080 
env[1566]: time="2023-10-02T19:18:49.497952812Z" level=error msg="StartContainer for \"bc6fd02d7d979bfba924818b95a2410c65e55e3695acad2f24cc2bd485a5a5c9\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:18:49.498479 kubelet[2020]: E1002 19:18:49.498404 2020 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="bc6fd02d7d979bfba924818b95a2410c65e55e3695acad2f24cc2bd485a5a5c9" Oct 2 19:18:49.498648 kubelet[2020]: E1002 19:18:49.498627 2020 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:18:49.498648 kubelet[2020]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:18:49.498648 kubelet[2020]: rm /hostbin/cilium-mount Oct 2 19:18:49.498648 kubelet[2020]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-fmbww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-t7627_kube-system(8dffdf61-0425-4cf2-8b39-ee68b2581a4a): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:18:49.499047 kubelet[2020]: E1002 19:18:49.498717 2020 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-t7627" podUID=8dffdf61-0425-4cf2-8b39-ee68b2581a4a Oct 2 19:18:49.669989 env[1566]: time="2023-10-02T19:18:49.669928210Z" level=info msg="CreateContainer within sandbox \"1968a7cc7e63e662fa3fb4ca193b0e32ef798cda1da7664530e0ed9418ecfd87\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:18:49.696263 env[1566]: time="2023-10-02T19:18:49.696154995Z" level=info msg="CreateContainer within sandbox \"1968a7cc7e63e662fa3fb4ca193b0e32ef798cda1da7664530e0ed9418ecfd87\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"85f93522990e15f00f8039830593fda608dcb34e2236a4e452f7a2a6844cc745\"" Oct 2 19:18:49.697674 env[1566]: time="2023-10-02T19:18:49.697610613Z" level=info msg="StartContainer for \"85f93522990e15f00f8039830593fda608dcb34e2236a4e452f7a2a6844cc745\"" Oct 2 19:18:49.741702 systemd[1]: Started cri-containerd-85f93522990e15f00f8039830593fda608dcb34e2236a4e452f7a2a6844cc745.scope. Oct 2 19:18:49.779191 systemd[1]: cri-containerd-85f93522990e15f00f8039830593fda608dcb34e2236a4e452f7a2a6844cc745.scope: Deactivated successfully. 
Oct 2 19:18:49.800059 env[1566]: time="2023-10-02T19:18:49.799968301Z" level=info msg="shim disconnected" id=85f93522990e15f00f8039830593fda608dcb34e2236a4e452f7a2a6844cc745 Oct 2 19:18:49.800402 env[1566]: time="2023-10-02T19:18:49.800370096Z" level=warning msg="cleaning up after shim disconnected" id=85f93522990e15f00f8039830593fda608dcb34e2236a4e452f7a2a6844cc745 namespace=k8s.io Oct 2 19:18:49.800542 env[1566]: time="2023-10-02T19:18:49.800514336Z" level=info msg="cleaning up dead shim" Oct 2 19:18:49.826707 env[1566]: time="2023-10-02T19:18:49.826643213Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:18:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3032 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:18:49Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/85f93522990e15f00f8039830593fda608dcb34e2236a4e452f7a2a6844cc745/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:18:49.827481 env[1566]: time="2023-10-02T19:18:49.827388386Z" level=error msg="copy shim log" error="read /proc/self/fd/38: file already closed" Oct 2 19:18:49.827939 env[1566]: time="2023-10-02T19:18:49.827855460Z" level=error msg="Failed to pipe stdout of container \"85f93522990e15f00f8039830593fda608dcb34e2236a4e452f7a2a6844cc745\"" error="reading from a closed fifo" Oct 2 19:18:49.828168 env[1566]: time="2023-10-02T19:18:49.828111408Z" level=error msg="Failed to pipe stderr of container \"85f93522990e15f00f8039830593fda608dcb34e2236a4e452f7a2a6844cc745\"" error="reading from a closed fifo" Oct 2 19:18:49.831003 env[1566]: time="2023-10-02T19:18:49.830932442Z" level=error msg="StartContainer for \"85f93522990e15f00f8039830593fda608dcb34e2236a4e452f7a2a6844cc745\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:18:49.831466 kubelet[2020]: E1002 19:18:49.831431 2020 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="85f93522990e15f00f8039830593fda608dcb34e2236a4e452f7a2a6844cc745" Oct 2 19:18:49.831716 kubelet[2020]: E1002 19:18:49.831573 2020 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:18:49.831716 kubelet[2020]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:18:49.831716 kubelet[2020]: rm /hostbin/cilium-mount Oct 2 19:18:49.831716 kubelet[2020]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-fmbww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-t7627_kube-system(8dffdf61-0425-4cf2-8b39-ee68b2581a4a): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:18:49.832078 kubelet[2020]: E1002 19:18:49.831630 2020 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-t7627" podUID=8dffdf61-0425-4cf2-8b39-ee68b2581a4a Oct 2 19:18:49.859196 kubelet[2020]: E1002 19:18:49.859122 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:50.612985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3471487697.mount: Deactivated successfully. 
Oct 2 19:18:50.672272 kubelet[2020]: I1002 19:18:50.672230 2020 scope.go:115] "RemoveContainer" containerID="bc6fd02d7d979bfba924818b95a2410c65e55e3695acad2f24cc2bd485a5a5c9" Oct 2 19:18:50.672832 kubelet[2020]: I1002 19:18:50.672800 2020 scope.go:115] "RemoveContainer" containerID="bc6fd02d7d979bfba924818b95a2410c65e55e3695acad2f24cc2bd485a5a5c9" Oct 2 19:18:50.676277 env[1566]: time="2023-10-02T19:18:50.676200984Z" level=info msg="RemoveContainer for \"bc6fd02d7d979bfba924818b95a2410c65e55e3695acad2f24cc2bd485a5a5c9\"" Oct 2 19:18:50.677959 env[1566]: time="2023-10-02T19:18:50.676295316Z" level=info msg="RemoveContainer for \"bc6fd02d7d979bfba924818b95a2410c65e55e3695acad2f24cc2bd485a5a5c9\"" Oct 2 19:18:50.677959 env[1566]: time="2023-10-02T19:18:50.677872039Z" level=error msg="RemoveContainer for \"bc6fd02d7d979bfba924818b95a2410c65e55e3695acad2f24cc2bd485a5a5c9\" failed" error="failed to set removing state for container \"bc6fd02d7d979bfba924818b95a2410c65e55e3695acad2f24cc2bd485a5a5c9\": container is already in removing state" Oct 2 19:18:50.678678 kubelet[2020]: E1002 19:18:50.678234 2020 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"bc6fd02d7d979bfba924818b95a2410c65e55e3695acad2f24cc2bd485a5a5c9\": container is already in removing state" containerID="bc6fd02d7d979bfba924818b95a2410c65e55e3695acad2f24cc2bd485a5a5c9" Oct 2 19:18:50.678678 kubelet[2020]: E1002 19:18:50.678317 2020 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "bc6fd02d7d979bfba924818b95a2410c65e55e3695acad2f24cc2bd485a5a5c9": container is already in removing state; Skipping pod "cilium-t7627_kube-system(8dffdf61-0425-4cf2-8b39-ee68b2581a4a)" Oct 2 19:18:50.679539 kubelet[2020]: E1002 19:18:50.679489 2020 pod_workers.go:965] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-t7627_kube-system(8dffdf61-0425-4cf2-8b39-ee68b2581a4a)\"" pod="kube-system/cilium-t7627" podUID=8dffdf61-0425-4cf2-8b39-ee68b2581a4a Oct 2 19:18:50.714691 env[1566]: time="2023-10-02T19:18:50.714606388Z" level=info msg="RemoveContainer for \"bc6fd02d7d979bfba924818b95a2410c65e55e3695acad2f24cc2bd485a5a5c9\" returns successfully" Oct 2 19:18:50.859713 kubelet[2020]: E1002 19:18:50.859672 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:51.598924 env[1566]: time="2023-10-02T19:18:51.598823632Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:51.601636 env[1566]: time="2023-10-02T19:18:51.601549052Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e0bfc5d64e2c86e8497f9da5fbf169dc17a08c923bc75187d41ff880cb71c12f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:51.604660 env[1566]: time="2023-10-02T19:18:51.604596538Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:18:51.606176 env[1566]: time="2023-10-02T19:18:51.606121241Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\" returns image reference \"sha256:e0bfc5d64e2c86e8497f9da5fbf169dc17a08c923bc75187d41ff880cb71c12f\"" Oct 2 19:18:51.612224 env[1566]: time="2023-10-02T19:18:51.612170374Z" level=info msg="CreateContainer within sandbox 
\"189506401bae42c983fcb64c23906eab76fb55a5c963e190088c57f01741ea2c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 19:18:51.633036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3180509350.mount: Deactivated successfully. Oct 2 19:18:51.643251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2108827297.mount: Deactivated successfully. Oct 2 19:18:51.649558 env[1566]: time="2023-10-02T19:18:51.649493699Z" level=info msg="CreateContainer within sandbox \"189506401bae42c983fcb64c23906eab76fb55a5c963e190088c57f01741ea2c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"dde0710ad63cf9d29366ed65f1eea6035066f0dfd04f9246d2d6fa6eeb2ccdbc\"" Oct 2 19:18:51.650853 env[1566]: time="2023-10-02T19:18:51.650771419Z" level=info msg="StartContainer for \"dde0710ad63cf9d29366ed65f1eea6035066f0dfd04f9246d2d6fa6eeb2ccdbc\"" Oct 2 19:18:51.678753 kubelet[2020]: E1002 19:18:51.678674 2020 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-t7627_kube-system(8dffdf61-0425-4cf2-8b39-ee68b2581a4a)\"" pod="kube-system/cilium-t7627" podUID=8dffdf61-0425-4cf2-8b39-ee68b2581a4a Oct 2 19:18:51.702087 systemd[1]: Started cri-containerd-dde0710ad63cf9d29366ed65f1eea6035066f0dfd04f9246d2d6fa6eeb2ccdbc.scope. 
Oct 2 19:18:51.745000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.745000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.745000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.745000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.745000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.745000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.745000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.745000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.745000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.745000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.746000 audit: BPF prog-id=96 op=LOAD Oct 2 19:18:51.747000 audit[3051]: AVC avc: denied { bpf } for pid=3051 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.747000 audit[3051]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000145b38 a2=10 a3=0 items=0 ppid=2898 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:51.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464653037313061643633636639643239333636656436356631656561 Oct 2 19:18:51.748000 audit[3051]: AVC avc: denied { perfmon } for pid=3051 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.748000 audit[3051]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001455a0 a2=3c a3=0 items=0 ppid=2898 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:51.748000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464653037313061643633636639643239333636656436356631656561 Oct 2 19:18:51.748000 audit[3051]: AVC avc: denied { bpf } for pid=3051 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.748000 audit[3051]: AVC avc: denied { bpf } for pid=3051 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.748000 audit[3051]: AVC avc: denied { bpf } for pid=3051 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.748000 audit[3051]: AVC avc: denied { perfmon } for pid=3051 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.748000 audit[3051]: AVC avc: denied { perfmon } for pid=3051 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.748000 audit[3051]: AVC avc: denied { perfmon } for pid=3051 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.748000 audit[3051]: AVC avc: denied { perfmon } for pid=3051 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.748000 audit[3051]: AVC avc: denied { perfmon } for pid=3051 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.748000 audit[3051]: AVC avc: denied { bpf } for pid=3051 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.748000 audit[3051]: AVC avc: denied { bpf } for pid=3051 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.748000 audit: BPF 
prog-id=97 op=LOAD Oct 2 19:18:51.748000 audit[3051]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001458e0 a2=78 a3=0 items=0 ppid=2898 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:51.748000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464653037313061643633636639643239333636656436356631656561 Oct 2 19:18:51.750000 audit[3051]: AVC avc: denied { bpf } for pid=3051 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.750000 audit[3051]: AVC avc: denied { bpf } for pid=3051 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.750000 audit[3051]: AVC avc: denied { perfmon } for pid=3051 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.750000 audit[3051]: AVC avc: denied { perfmon } for pid=3051 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.750000 audit[3051]: AVC avc: denied { perfmon } for pid=3051 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.750000 audit[3051]: AVC avc: denied { perfmon } for pid=3051 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.750000 audit[3051]: AVC avc: denied { perfmon } for 
pid=3051 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.750000 audit[3051]: AVC avc: denied { bpf } for pid=3051 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.750000 audit[3051]: AVC avc: denied { bpf } for pid=3051 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.750000 audit: BPF prog-id=98 op=LOAD Oct 2 19:18:51.750000 audit[3051]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000145670 a2=78 a3=0 items=0 ppid=2898 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:51.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464653037313061643633636639643239333636656436356631656561 Oct 2 19:18:51.751000 audit: BPF prog-id=98 op=UNLOAD Oct 2 19:18:51.751000 audit: BPF prog-id=97 op=UNLOAD Oct 2 19:18:51.751000 audit[3051]: AVC avc: denied { bpf } for pid=3051 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.751000 audit[3051]: AVC avc: denied { bpf } for pid=3051 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.751000 audit[3051]: AVC avc: denied { bpf } for pid=3051 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:18:51.751000 audit[3051]: AVC avc: denied { perfmon } for pid=3051 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.751000 audit[3051]: AVC avc: denied { perfmon } for pid=3051 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.751000 audit[3051]: AVC avc: denied { perfmon } for pid=3051 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.751000 audit[3051]: AVC avc: denied { perfmon } for pid=3051 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.751000 audit[3051]: AVC avc: denied { perfmon } for pid=3051 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.751000 audit[3051]: AVC avc: denied { bpf } for pid=3051 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.751000 audit[3051]: AVC avc: denied { bpf } for pid=3051 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:18:51.751000 audit: BPF prog-id=99 op=LOAD Oct 2 19:18:51.751000 audit[3051]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000145b40 a2=78 a3=0 items=0 ppid=2898 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:18:51.751000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464653037313061643633636639643239333636656436356631656561 Oct 2 19:18:51.789034 env[1566]: time="2023-10-02T19:18:51.788970648Z" level=info msg="StartContainer for \"dde0710ad63cf9d29366ed65f1eea6035066f0dfd04f9246d2d6fa6eeb2ccdbc\" returns successfully" Oct 2 19:18:51.861328 kubelet[2020]: E1002 19:18:51.861179 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:51.870000 audit[3061]: AVC avc: denied { map_create } for pid=3061 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c399,c934 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c399,c934 tclass=bpf permissive=0 Oct 2 19:18:51.870000 audit[3061]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-13 a0=0 a1=4000687768 a2=48 a3=0 items=0 ppid=2898 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c399,c934 key=(null) Oct 2 19:18:51.870000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 19:18:52.572681 kubelet[2020]: W1002 19:18:52.572500 2020 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8dffdf61_0425_4cf2_8b39_ee68b2581a4a.slice/cri-containerd-bc6fd02d7d979bfba924818b95a2410c65e55e3695acad2f24cc2bd485a5a5c9.scope WatchSource:0}: container "bc6fd02d7d979bfba924818b95a2410c65e55e3695acad2f24cc2bd485a5a5c9" in namespace "k8s.io": not found Oct 2 19:18:52.861818 kubelet[2020]: E1002 19:18:52.861658 2020 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:53.670306 kubelet[2020]: E1002 19:18:53.670261 2020 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:53.862183 kubelet[2020]: E1002 19:18:53.862136 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:53.898455 kubelet[2020]: E1002 19:18:53.898401 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:54.863709 kubelet[2020]: E1002 19:18:54.863640 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:55.682817 kubelet[2020]: W1002 19:18:55.682761 2020 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8dffdf61_0425_4cf2_8b39_ee68b2581a4a.slice/cri-containerd-85f93522990e15f00f8039830593fda608dcb34e2236a4e452f7a2a6844cc745.scope WatchSource:0}: task 85f93522990e15f00f8039830593fda608dcb34e2236a4e452f7a2a6844cc745 not found: not found Oct 2 19:18:55.864258 kubelet[2020]: E1002 19:18:55.864181 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:56.865390 kubelet[2020]: E1002 19:18:56.865321 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:57.865847 kubelet[2020]: E1002 19:18:57.865773 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:18:58.866934 kubelet[2020]: E1002 19:18:58.866865 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:18:58.900069 kubelet[2020]: E1002 19:18:58.900012 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:18:59.868478 kubelet[2020]: E1002 19:18:59.868429 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:00.869369 kubelet[2020]: E1002 19:19:00.869324 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:01.870949 kubelet[2020]: E1002 19:19:01.870885 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:02.871979 kubelet[2020]: E1002 19:19:02.871893 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:03.873591 kubelet[2020]: E1002 19:19:03.873519 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:03.901949 kubelet[2020]: E1002 19:19:03.901882 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:04.874506 kubelet[2020]: E1002 19:19:04.874427 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:05.153415 env[1566]: time="2023-10-02T19:19:05.153038241Z" level=info msg="CreateContainer within sandbox \"1968a7cc7e63e662fa3fb4ca193b0e32ef798cda1da7664530e0ed9418ecfd87\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:19:05.173299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1888528891.mount: Deactivated successfully. 
Oct 2 19:19:05.185193 env[1566]: time="2023-10-02T19:19:05.185109974Z" level=info msg="CreateContainer within sandbox \"1968a7cc7e63e662fa3fb4ca193b0e32ef798cda1da7664530e0ed9418ecfd87\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"0c7c26ad9ffc6448b309186c9a5ded5759b3f2e2c281c8bfc28a8db4c5f6e46f\"" Oct 2 19:19:05.187454 env[1566]: time="2023-10-02T19:19:05.185997949Z" level=info msg="StartContainer for \"0c7c26ad9ffc6448b309186c9a5ded5759b3f2e2c281c8bfc28a8db4c5f6e46f\"" Oct 2 19:19:05.233840 systemd[1]: Started cri-containerd-0c7c26ad9ffc6448b309186c9a5ded5759b3f2e2c281c8bfc28a8db4c5f6e46f.scope. Oct 2 19:19:05.279255 systemd[1]: cri-containerd-0c7c26ad9ffc6448b309186c9a5ded5759b3f2e2c281c8bfc28a8db4c5f6e46f.scope: Deactivated successfully. Oct 2 19:19:05.545129 env[1566]: time="2023-10-02T19:19:05.544889348Z" level=info msg="shim disconnected" id=0c7c26ad9ffc6448b309186c9a5ded5759b3f2e2c281c8bfc28a8db4c5f6e46f Oct 2 19:19:05.545129 env[1566]: time="2023-10-02T19:19:05.545017448Z" level=warning msg="cleaning up after shim disconnected" id=0c7c26ad9ffc6448b309186c9a5ded5759b3f2e2c281c8bfc28a8db4c5f6e46f namespace=k8s.io Oct 2 19:19:05.545129 env[1566]: time="2023-10-02T19:19:05.545040236Z" level=info msg="cleaning up dead shim" Oct 2 19:19:05.571117 env[1566]: time="2023-10-02T19:19:05.571037542Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:19:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3105 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:19:05Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/0c7c26ad9ffc6448b309186c9a5ded5759b3f2e2c281c8bfc28a8db4c5f6e46f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:19:05.572008 env[1566]: time="2023-10-02T19:19:05.571829277Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 19:19:05.572510 env[1566]: 
time="2023-10-02T19:19:05.572457752Z" level=error msg="Failed to pipe stdout of container \"0c7c26ad9ffc6448b309186c9a5ded5759b3f2e2c281c8bfc28a8db4c5f6e46f\"" error="reading from a closed fifo" Oct 2 19:19:05.572690 env[1566]: time="2023-10-02T19:19:05.572456264Z" level=error msg="Failed to pipe stderr of container \"0c7c26ad9ffc6448b309186c9a5ded5759b3f2e2c281c8bfc28a8db4c5f6e46f\"" error="reading from a closed fifo" Oct 2 19:19:05.575230 env[1566]: time="2023-10-02T19:19:05.575155396Z" level=error msg="StartContainer for \"0c7c26ad9ffc6448b309186c9a5ded5759b3f2e2c281c8bfc28a8db4c5f6e46f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:19:05.575492 kubelet[2020]: E1002 19:19:05.575458 2020 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="0c7c26ad9ffc6448b309186c9a5ded5759b3f2e2c281c8bfc28a8db4c5f6e46f" Oct 2 19:19:05.575649 kubelet[2020]: E1002 19:19:05.575601 2020 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:19:05.575649 kubelet[2020]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:19:05.575649 kubelet[2020]: rm /hostbin/cilium-mount Oct 2 19:19:05.575649 kubelet[2020]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-fmbww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-t7627_kube-system(8dffdf61-0425-4cf2-8b39-ee68b2581a4a): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:19:05.575970 kubelet[2020]: E1002 19:19:05.575680 2020 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-t7627" podUID=8dffdf61-0425-4cf2-8b39-ee68b2581a4a Oct 2 19:19:05.721689 kubelet[2020]: I1002 19:19:05.721622 2020 scope.go:115] "RemoveContainer" containerID="85f93522990e15f00f8039830593fda608dcb34e2236a4e452f7a2a6844cc745" Oct 2 19:19:05.722453 kubelet[2020]: I1002 19:19:05.722414 2020 scope.go:115] "RemoveContainer" containerID="85f93522990e15f00f8039830593fda608dcb34e2236a4e452f7a2a6844cc745" Oct 2 19:19:05.725189 env[1566]: time="2023-10-02T19:19:05.725135302Z" level=info msg="RemoveContainer for \"85f93522990e15f00f8039830593fda608dcb34e2236a4e452f7a2a6844cc745\"" Oct 2 19:19:05.726163 env[1566]: time="2023-10-02T19:19:05.726080229Z" level=info msg="RemoveContainer for \"85f93522990e15f00f8039830593fda608dcb34e2236a4e452f7a2a6844cc745\"" Oct 2 19:19:05.726375 env[1566]: time="2023-10-02T19:19:05.726289809Z" level=error msg="RemoveContainer for \"85f93522990e15f00f8039830593fda608dcb34e2236a4e452f7a2a6844cc745\" failed" error="failed to set removing state for container \"85f93522990e15f00f8039830593fda608dcb34e2236a4e452f7a2a6844cc745\": container is already in removing state" Oct 2 19:19:05.726704 kubelet[2020]: E1002 19:19:05.726658 2020 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"85f93522990e15f00f8039830593fda608dcb34e2236a4e452f7a2a6844cc745\": container is already in removing state" containerID="85f93522990e15f00f8039830593fda608dcb34e2236a4e452f7a2a6844cc745" Oct 2 19:19:05.726841 kubelet[2020]: E1002 19:19:05.726753 2020 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "85f93522990e15f00f8039830593fda608dcb34e2236a4e452f7a2a6844cc745": container is already in removing state; Skipping pod "cilium-t7627_kube-system(8dffdf61-0425-4cf2-8b39-ee68b2581a4a)" Oct 2 
19:19:05.727445 kubelet[2020]: E1002 19:19:05.727394 2020 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-t7627_kube-system(8dffdf61-0425-4cf2-8b39-ee68b2581a4a)\"" pod="kube-system/cilium-t7627" podUID=8dffdf61-0425-4cf2-8b39-ee68b2581a4a Oct 2 19:19:05.734443 env[1566]: time="2023-10-02T19:19:05.734380606Z" level=info msg="RemoveContainer for \"85f93522990e15f00f8039830593fda608dcb34e2236a4e452f7a2a6844cc745\" returns successfully" Oct 2 19:19:05.874740 kubelet[2020]: E1002 19:19:05.874614 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:06.167130 systemd[1]: run-containerd-runc-k8s.io-0c7c26ad9ffc6448b309186c9a5ded5759b3f2e2c281c8bfc28a8db4c5f6e46f-runc.lDgFqQ.mount: Deactivated successfully. Oct 2 19:19:06.167308 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c7c26ad9ffc6448b309186c9a5ded5759b3f2e2c281c8bfc28a8db4c5f6e46f-rootfs.mount: Deactivated successfully. 
Oct 2 19:19:06.875680 kubelet[2020]: E1002 19:19:06.875633 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:07.876523 kubelet[2020]: E1002 19:19:07.876467 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:08.652981 kubelet[2020]: W1002 19:19:08.652877 2020 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8dffdf61_0425_4cf2_8b39_ee68b2581a4a.slice/cri-containerd-0c7c26ad9ffc6448b309186c9a5ded5759b3f2e2c281c8bfc28a8db4c5f6e46f.scope WatchSource:0}: task 0c7c26ad9ffc6448b309186c9a5ded5759b3f2e2c281c8bfc28a8db4c5f6e46f not found: not found Oct 2 19:19:08.877847 kubelet[2020]: E1002 19:19:08.877778 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:08.902869 kubelet[2020]: E1002 19:19:08.902813 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:09.878514 kubelet[2020]: E1002 19:19:09.878452 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:10.878869 kubelet[2020]: E1002 19:19:10.878802 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:11.879722 kubelet[2020]: E1002 19:19:11.879658 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:12.880472 kubelet[2020]: E1002 19:19:12.880407 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:13.669729 kubelet[2020]: E1002 19:19:13.669626 2020 
file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:13.706667 env[1566]: time="2023-10-02T19:19:13.706607806Z" level=info msg="StopPodSandbox for \"e4ac73e9e26cee9ab50a439cc6e66fd2db28c3015bf930e4f29c46b66778110d\"" Oct 2 19:19:13.707544 env[1566]: time="2023-10-02T19:19:13.707445274Z" level=info msg="TearDown network for sandbox \"e4ac73e9e26cee9ab50a439cc6e66fd2db28c3015bf930e4f29c46b66778110d\" successfully" Oct 2 19:19:13.707769 env[1566]: time="2023-10-02T19:19:13.707721022Z" level=info msg="StopPodSandbox for \"e4ac73e9e26cee9ab50a439cc6e66fd2db28c3015bf930e4f29c46b66778110d\" returns successfully" Oct 2 19:19:13.708878 env[1566]: time="2023-10-02T19:19:13.708770098Z" level=info msg="RemovePodSandbox for \"e4ac73e9e26cee9ab50a439cc6e66fd2db28c3015bf930e4f29c46b66778110d\"" Oct 2 19:19:13.709067 env[1566]: time="2023-10-02T19:19:13.708871618Z" level=info msg="Forcibly stopping sandbox \"e4ac73e9e26cee9ab50a439cc6e66fd2db28c3015bf930e4f29c46b66778110d\"" Oct 2 19:19:13.709169 env[1566]: time="2023-10-02T19:19:13.709133553Z" level=info msg="TearDown network for sandbox \"e4ac73e9e26cee9ab50a439cc6e66fd2db28c3015bf930e4f29c46b66778110d\" successfully" Oct 2 19:19:13.714961 env[1566]: time="2023-10-02T19:19:13.714863623Z" level=info msg="RemovePodSandbox \"e4ac73e9e26cee9ab50a439cc6e66fd2db28c3015bf930e4f29c46b66778110d\" returns successfully" Oct 2 19:19:13.715742 env[1566]: time="2023-10-02T19:19:13.715697335Z" level=info msg="StopPodSandbox for \"5c3ce108657c06325dd6b53b9f0473c157ce2b03d50cf47f2c7da12e0419dc56\"" Oct 2 19:19:13.716116 env[1566]: time="2023-10-02T19:19:13.716047470Z" level=info msg="TearDown network for sandbox \"5c3ce108657c06325dd6b53b9f0473c157ce2b03d50cf47f2c7da12e0419dc56\" successfully" Oct 2 19:19:13.716261 env[1566]: time="2023-10-02T19:19:13.716227710Z" level=info msg="StopPodSandbox for \"5c3ce108657c06325dd6b53b9f0473c157ce2b03d50cf47f2c7da12e0419dc56\" returns 
successfully" Oct 2 19:19:13.717083 env[1566]: time="2023-10-02T19:19:13.717008382Z" level=info msg="RemovePodSandbox for \"5c3ce108657c06325dd6b53b9f0473c157ce2b03d50cf47f2c7da12e0419dc56\"" Oct 2 19:19:13.717240 env[1566]: time="2023-10-02T19:19:13.717089790Z" level=info msg="Forcibly stopping sandbox \"5c3ce108657c06325dd6b53b9f0473c157ce2b03d50cf47f2c7da12e0419dc56\"" Oct 2 19:19:13.717345 env[1566]: time="2023-10-02T19:19:13.717280002Z" level=info msg="TearDown network for sandbox \"5c3ce108657c06325dd6b53b9f0473c157ce2b03d50cf47f2c7da12e0419dc56\" successfully" Oct 2 19:19:13.722372 env[1566]: time="2023-10-02T19:19:13.722292784Z" level=info msg="RemovePodSandbox \"5c3ce108657c06325dd6b53b9f0473c157ce2b03d50cf47f2c7da12e0419dc56\" returns successfully" Oct 2 19:19:13.881326 kubelet[2020]: E1002 19:19:13.881291 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:13.903738 kubelet[2020]: E1002 19:19:13.903683 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:14.882100 kubelet[2020]: E1002 19:19:14.882055 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:15.883900 kubelet[2020]: E1002 19:19:15.883825 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:16.884901 kubelet[2020]: E1002 19:19:16.884852 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:17.886435 kubelet[2020]: E1002 19:19:17.886391 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:18.887762 kubelet[2020]: E1002 19:19:18.887689 2020 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:18.904708 kubelet[2020]: E1002 19:19:18.904662 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:19.888585 kubelet[2020]: E1002 19:19:19.888484 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:20.889669 kubelet[2020]: E1002 19:19:20.889587 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:21.149826 kubelet[2020]: E1002 19:19:21.149244 2020 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-t7627_kube-system(8dffdf61-0425-4cf2-8b39-ee68b2581a4a)\"" pod="kube-system/cilium-t7627" podUID=8dffdf61-0425-4cf2-8b39-ee68b2581a4a Oct 2 19:19:21.890321 kubelet[2020]: E1002 19:19:21.890253 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:22.890931 kubelet[2020]: E1002 19:19:22.890765 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:23.891412 kubelet[2020]: E1002 19:19:23.891336 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:23.906240 kubelet[2020]: E1002 19:19:23.906186 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:24.891963 kubelet[2020]: E1002 19:19:24.891864 2020 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:25.892995 kubelet[2020]: E1002 19:19:25.892941 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:26.894055 kubelet[2020]: E1002 19:19:26.893985 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:27.894950 kubelet[2020]: E1002 19:19:27.894880 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:28.895775 kubelet[2020]: E1002 19:19:28.895730 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:28.907717 kubelet[2020]: E1002 19:19:28.907675 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:29.897142 kubelet[2020]: E1002 19:19:29.897076 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:30.897620 kubelet[2020]: E1002 19:19:30.897563 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:31.898658 kubelet[2020]: E1002 19:19:31.898597 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:32.898821 kubelet[2020]: E1002 19:19:32.898754 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:33.670402 kubelet[2020]: E1002 19:19:33.670334 2020 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:33.899546 kubelet[2020]: E1002 19:19:33.899481 2020 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:33.909844 kubelet[2020]: E1002 19:19:33.909814 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:34.900471 kubelet[2020]: E1002 19:19:34.900427 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:35.152476 env[1566]: time="2023-10-02T19:19:35.152051739Z" level=info msg="CreateContainer within sandbox \"1968a7cc7e63e662fa3fb4ca193b0e32ef798cda1da7664530e0ed9418ecfd87\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:19:35.177865 env[1566]: time="2023-10-02T19:19:35.177798534Z" level=info msg="CreateContainer within sandbox \"1968a7cc7e63e662fa3fb4ca193b0e32ef798cda1da7664530e0ed9418ecfd87\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"c05ef71cb16ddfe108168b3609c8abd04ffbcdfa3823e1b7fe1cb21abf4c6df7\"" Oct 2 19:19:35.179284 env[1566]: time="2023-10-02T19:19:35.179204516Z" level=info msg="StartContainer for \"c05ef71cb16ddfe108168b3609c8abd04ffbcdfa3823e1b7fe1cb21abf4c6df7\"" Oct 2 19:19:35.225372 systemd[1]: Started cri-containerd-c05ef71cb16ddfe108168b3609c8abd04ffbcdfa3823e1b7fe1cb21abf4c6df7.scope. Oct 2 19:19:35.237585 systemd[1]: run-containerd-runc-k8s.io-c05ef71cb16ddfe108168b3609c8abd04ffbcdfa3823e1b7fe1cb21abf4c6df7-runc.LCbxGm.mount: Deactivated successfully. Oct 2 19:19:35.269355 systemd[1]: cri-containerd-c05ef71cb16ddfe108168b3609c8abd04ffbcdfa3823e1b7fe1cb21abf4c6df7.scope: Deactivated successfully. 
Oct 2 19:19:35.291411 env[1566]: time="2023-10-02T19:19:35.291338122Z" level=info msg="shim disconnected" id=c05ef71cb16ddfe108168b3609c8abd04ffbcdfa3823e1b7fe1cb21abf4c6df7 Oct 2 19:19:35.291735 env[1566]: time="2023-10-02T19:19:35.291414982Z" level=warning msg="cleaning up after shim disconnected" id=c05ef71cb16ddfe108168b3609c8abd04ffbcdfa3823e1b7fe1cb21abf4c6df7 namespace=k8s.io Oct 2 19:19:35.291735 env[1566]: time="2023-10-02T19:19:35.291437939Z" level=info msg="cleaning up dead shim" Oct 2 19:19:35.318677 env[1566]: time="2023-10-02T19:19:35.318588232Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:19:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3146 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:19:35Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c05ef71cb16ddfe108168b3609c8abd04ffbcdfa3823e1b7fe1cb21abf4c6df7/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:19:35.319175 env[1566]: time="2023-10-02T19:19:35.319063228Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 19:19:35.319596 env[1566]: time="2023-10-02T19:19:35.319535693Z" level=error msg="Failed to pipe stderr of container \"c05ef71cb16ddfe108168b3609c8abd04ffbcdfa3823e1b7fe1cb21abf4c6df7\"" error="reading from a closed fifo" Oct 2 19:19:35.323256 env[1566]: time="2023-10-02T19:19:35.323178347Z" level=error msg="Failed to pipe stdout of container \"c05ef71cb16ddfe108168b3609c8abd04ffbcdfa3823e1b7fe1cb21abf4c6df7\"" error="reading from a closed fifo" Oct 2 19:19:35.325822 env[1566]: time="2023-10-02T19:19:35.325733139Z" level=error msg="StartContainer for \"c05ef71cb16ddfe108168b3609c8abd04ffbcdfa3823e1b7fe1cb21abf4c6df7\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:19:35.326378 kubelet[2020]: E1002 19:19:35.326303 2020 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="c05ef71cb16ddfe108168b3609c8abd04ffbcdfa3823e1b7fe1cb21abf4c6df7" Oct 2 19:19:35.326576 kubelet[2020]: E1002 19:19:35.326536 2020 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:19:35.326576 kubelet[2020]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:19:35.326576 kubelet[2020]: rm /hostbin/cilium-mount Oct 2 19:19:35.326576 kubelet[2020]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-fmbww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-t7627_kube-system(8dffdf61-0425-4cf2-8b39-ee68b2581a4a): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:19:35.326877 kubelet[2020]: E1002 19:19:35.326623 2020 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-t7627" podUID=8dffdf61-0425-4cf2-8b39-ee68b2581a4a Oct 2 19:19:35.789918 kubelet[2020]: I1002 19:19:35.789858 2020 scope.go:115] "RemoveContainer" containerID="0c7c26ad9ffc6448b309186c9a5ded5759b3f2e2c281c8bfc28a8db4c5f6e46f" Oct 2 19:19:35.790612 kubelet[2020]: I1002 19:19:35.790580 2020 scope.go:115] "RemoveContainer" containerID="0c7c26ad9ffc6448b309186c9a5ded5759b3f2e2c281c8bfc28a8db4c5f6e46f" Oct 2 19:19:35.794058 env[1566]: time="2023-10-02T19:19:35.793983042Z" level=info msg="RemoveContainer for \"0c7c26ad9ffc6448b309186c9a5ded5759b3f2e2c281c8bfc28a8db4c5f6e46f\"" Oct 2 19:19:35.794475 env[1566]: time="2023-10-02T19:19:35.794142078Z" level=info msg="RemoveContainer for \"0c7c26ad9ffc6448b309186c9a5ded5759b3f2e2c281c8bfc28a8db4c5f6e46f\"" Oct 2 19:19:35.794777 env[1566]: 
time="2023-10-02T19:19:35.794679463Z" level=error msg="RemoveContainer for \"0c7c26ad9ffc6448b309186c9a5ded5759b3f2e2c281c8bfc28a8db4c5f6e46f\" failed" error="failed to set removing state for container \"0c7c26ad9ffc6448b309186c9a5ded5759b3f2e2c281c8bfc28a8db4c5f6e46f\": container is already in removing state" Oct 2 19:19:35.797356 kubelet[2020]: E1002 19:19:35.797244 2020 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"0c7c26ad9ffc6448b309186c9a5ded5759b3f2e2c281c8bfc28a8db4c5f6e46f\": container is already in removing state" containerID="0c7c26ad9ffc6448b309186c9a5ded5759b3f2e2c281c8bfc28a8db4c5f6e46f" Oct 2 19:19:35.797356 kubelet[2020]: I1002 19:19:35.797314 2020 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:0c7c26ad9ffc6448b309186c9a5ded5759b3f2e2c281c8bfc28a8db4c5f6e46f} err="rpc error: code = Unknown desc = failed to set removing state for container \"0c7c26ad9ffc6448b309186c9a5ded5759b3f2e2c281c8bfc28a8db4c5f6e46f\": container is already in removing state" Oct 2 19:19:35.799041 env[1566]: time="2023-10-02T19:19:35.798973021Z" level=info msg="RemoveContainer for \"0c7c26ad9ffc6448b309186c9a5ded5759b3f2e2c281c8bfc28a8db4c5f6e46f\" returns successfully" Oct 2 19:19:35.799841 kubelet[2020]: E1002 19:19:35.799808 2020 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-t7627_kube-system(8dffdf61-0425-4cf2-8b39-ee68b2581a4a)\"" pod="kube-system/cilium-t7627" podUID=8dffdf61-0425-4cf2-8b39-ee68b2581a4a Oct 2 19:19:35.901375 kubelet[2020]: E1002 19:19:35.901328 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:36.168191 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-c05ef71cb16ddfe108168b3609c8abd04ffbcdfa3823e1b7fe1cb21abf4c6df7-rootfs.mount: Deactivated successfully. Oct 2 19:19:36.902353 kubelet[2020]: E1002 19:19:36.902286 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:37.903069 kubelet[2020]: E1002 19:19:37.903003 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:38.398131 kubelet[2020]: W1002 19:19:38.398067 2020 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8dffdf61_0425_4cf2_8b39_ee68b2581a4a.slice/cri-containerd-c05ef71cb16ddfe108168b3609c8abd04ffbcdfa3823e1b7fe1cb21abf4c6df7.scope WatchSource:0}: task c05ef71cb16ddfe108168b3609c8abd04ffbcdfa3823e1b7fe1cb21abf4c6df7 not found: not found Oct 2 19:19:38.903650 kubelet[2020]: E1002 19:19:38.903586 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:38.911564 kubelet[2020]: E1002 19:19:38.911535 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:39.904563 kubelet[2020]: E1002 19:19:39.904498 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:40.904947 kubelet[2020]: E1002 19:19:40.904873 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:41.906138 kubelet[2020]: E1002 19:19:41.906066 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:42.906748 kubelet[2020]: E1002 19:19:42.906681 2020 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:43.907608 kubelet[2020]: E1002 19:19:43.907570 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:43.913406 kubelet[2020]: E1002 19:19:43.913361 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:44.909349 kubelet[2020]: E1002 19:19:44.909280 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:45.909540 kubelet[2020]: E1002 19:19:45.909464 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:46.909997 kubelet[2020]: E1002 19:19:46.909950 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:47.149457 kubelet[2020]: E1002 19:19:47.149411 2020 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-t7627_kube-system(8dffdf61-0425-4cf2-8b39-ee68b2581a4a)\"" pod="kube-system/cilium-t7627" podUID=8dffdf61-0425-4cf2-8b39-ee68b2581a4a Oct 2 19:19:47.910920 kubelet[2020]: E1002 19:19:47.910859 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:48.911029 kubelet[2020]: E1002 19:19:48.910987 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:48.914627 kubelet[2020]: E1002 19:19:48.914583 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" Oct 2 19:19:49.912271 kubelet[2020]: E1002 19:19:49.912193 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:19:50.450081 env[1566]: time="2023-10-02T19:19:50.450025252Z" level=info msg="StopPodSandbox for \"1968a7cc7e63e662fa3fb4ca193b0e32ef798cda1da7664530e0ed9418ecfd87\"" Oct 2 19:19:50.450795 env[1566]: time="2023-10-02T19:19:50.450735222Z" level=info msg="Container to stop \"c05ef71cb16ddfe108168b3609c8abd04ffbcdfa3823e1b7fe1cb21abf4c6df7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:19:50.453869 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1968a7cc7e63e662fa3fb4ca193b0e32ef798cda1da7664530e0ed9418ecfd87-shm.mount: Deactivated successfully. Oct 2 19:19:50.473818 systemd[1]: cri-containerd-1968a7cc7e63e662fa3fb4ca193b0e32ef798cda1da7664530e0ed9418ecfd87.scope: Deactivated successfully. Oct 2 19:19:50.478261 kernel: kauditd_printk_skb: 226 callbacks suppressed Oct 2 19:19:50.478444 kernel: audit: type=1334 audit(1696274390.473:801): prog-id=92 op=UNLOAD Oct 2 19:19:50.473000 audit: BPF prog-id=92 op=UNLOAD Oct 2 19:19:50.480000 audit: BPF prog-id=95 op=UNLOAD Oct 2 19:19:50.485026 kernel: audit: type=1334 audit(1696274390.480:802): prog-id=95 op=UNLOAD Oct 2 19:19:50.493886 env[1566]: time="2023-10-02T19:19:50.493824546Z" level=info msg="StopContainer for \"dde0710ad63cf9d29366ed65f1eea6035066f0dfd04f9246d2d6fa6eeb2ccdbc\" with timeout 30 (s)" Oct 2 19:19:50.499201 env[1566]: time="2023-10-02T19:19:50.499082779Z" level=info msg="Stop container \"dde0710ad63cf9d29366ed65f1eea6035066f0dfd04f9246d2d6fa6eeb2ccdbc\" with signal terminated" Oct 2 19:19:50.538553 systemd[1]: cri-containerd-dde0710ad63cf9d29366ed65f1eea6035066f0dfd04f9246d2d6fa6eeb2ccdbc.scope: Deactivated successfully. 
Oct 2 19:19:50.538000 audit: BPF prog-id=96 op=UNLOAD Oct 2 19:19:50.542994 kernel: audit: type=1334 audit(1696274390.538:803): prog-id=96 op=UNLOAD Oct 2 19:19:50.544000 audit: BPF prog-id=99 op=UNLOAD Oct 2 19:19:50.549215 kernel: audit: type=1334 audit(1696274390.544:804): prog-id=99 op=UNLOAD Oct 2 19:19:50.556853 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1968a7cc7e63e662fa3fb4ca193b0e32ef798cda1da7664530e0ed9418ecfd87-rootfs.mount: Deactivated successfully. Oct 2 19:19:50.574898 env[1566]: time="2023-10-02T19:19:50.574823357Z" level=info msg="shim disconnected" id=1968a7cc7e63e662fa3fb4ca193b0e32ef798cda1da7664530e0ed9418ecfd87 Oct 2 19:19:50.575698 env[1566]: time="2023-10-02T19:19:50.574900241Z" level=warning msg="cleaning up after shim disconnected" id=1968a7cc7e63e662fa3fb4ca193b0e32ef798cda1da7664530e0ed9418ecfd87 namespace=k8s.io Oct 2 19:19:50.575844 env[1566]: time="2023-10-02T19:19:50.575693923Z" level=info msg="cleaning up dead shim" Oct 2 19:19:50.606766 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dde0710ad63cf9d29366ed65f1eea6035066f0dfd04f9246d2d6fa6eeb2ccdbc-rootfs.mount: Deactivated successfully. 
Oct 2 19:19:50.617667 env[1566]: time="2023-10-02T19:19:50.617602673Z" level=info msg="shim disconnected" id=dde0710ad63cf9d29366ed65f1eea6035066f0dfd04f9246d2d6fa6eeb2ccdbc Oct 2 19:19:50.618121 env[1566]: time="2023-10-02T19:19:50.618071898Z" level=warning msg="cleaning up after shim disconnected" id=dde0710ad63cf9d29366ed65f1eea6035066f0dfd04f9246d2d6fa6eeb2ccdbc namespace=k8s.io Oct 2 19:19:50.618321 env[1566]: time="2023-10-02T19:19:50.618281742Z" level=info msg="cleaning up dead shim" Oct 2 19:19:50.619961 env[1566]: time="2023-10-02T19:19:50.619843942Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:19:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3188 runtime=io.containerd.runc.v2\n" Oct 2 19:19:50.620509 env[1566]: time="2023-10-02T19:19:50.620454408Z" level=info msg="TearDown network for sandbox \"1968a7cc7e63e662fa3fb4ca193b0e32ef798cda1da7664530e0ed9418ecfd87\" successfully" Oct 2 19:19:50.620628 env[1566]: time="2023-10-02T19:19:50.620508480Z" level=info msg="StopPodSandbox for \"1968a7cc7e63e662fa3fb4ca193b0e32ef798cda1da7664530e0ed9418ecfd87\" returns successfully" Oct 2 19:19:50.651184 env[1566]: time="2023-10-02T19:19:50.651125789Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:19:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3207 runtime=io.containerd.runc.v2\n" Oct 2 19:19:50.656735 env[1566]: time="2023-10-02T19:19:50.656668447Z" level=info msg="StopContainer for \"dde0710ad63cf9d29366ed65f1eea6035066f0dfd04f9246d2d6fa6eeb2ccdbc\" returns successfully" Oct 2 19:19:50.657743 env[1566]: time="2023-10-02T19:19:50.657660957Z" level=info msg="StopPodSandbox for \"189506401bae42c983fcb64c23906eab76fb55a5c963e190088c57f01741ea2c\"" Oct 2 19:19:50.661857 env[1566]: time="2023-10-02T19:19:50.657761745Z" level=info msg="Container to stop \"dde0710ad63cf9d29366ed65f1eea6035066f0dfd04f9246d2d6fa6eeb2ccdbc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 
19:19:50.660456 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-189506401bae42c983fcb64c23906eab76fb55a5c963e190088c57f01741ea2c-shm.mount: Deactivated successfully. Oct 2 19:19:50.678000 audit: BPF prog-id=88 op=UNLOAD Oct 2 19:19:50.679359 systemd[1]: cri-containerd-189506401bae42c983fcb64c23906eab76fb55a5c963e190088c57f01741ea2c.scope: Deactivated successfully. Oct 2 19:19:50.682973 kernel: audit: type=1334 audit(1696274390.678:805): prog-id=88 op=UNLOAD Oct 2 19:19:50.683000 audit: BPF prog-id=91 op=UNLOAD Oct 2 19:19:50.687991 kernel: audit: type=1334 audit(1696274390.683:806): prog-id=91 op=UNLOAD Oct 2 19:19:50.735406 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-189506401bae42c983fcb64c23906eab76fb55a5c963e190088c57f01741ea2c-rootfs.mount: Deactivated successfully. Oct 2 19:19:50.748297 env[1566]: time="2023-10-02T19:19:50.748198016Z" level=info msg="shim disconnected" id=189506401bae42c983fcb64c23906eab76fb55a5c963e190088c57f01741ea2c Oct 2 19:19:50.748297 env[1566]: time="2023-10-02T19:19:50.748291436Z" level=warning msg="cleaning up after shim disconnected" id=189506401bae42c983fcb64c23906eab76fb55a5c963e190088c57f01741ea2c namespace=k8s.io Oct 2 19:19:50.748654 env[1566]: time="2023-10-02T19:19:50.748314776Z" level=info msg="cleaning up dead shim" Oct 2 19:19:50.764966 kubelet[2020]: I1002 19:19:50.763475 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-etc-cni-netd\") pod \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\" (UID: \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\") " Oct 2 19:19:50.764966 kubelet[2020]: I1002 19:19:50.763547 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-cilium-cgroup\") pod \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\" (UID: \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\") " Oct 2 
19:19:50.764966 kubelet[2020]: I1002 19:19:50.763599 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmbww\" (UniqueName: \"kubernetes.io/projected/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-kube-api-access-fmbww\") pod \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\" (UID: \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\") " Oct 2 19:19:50.764966 kubelet[2020]: I1002 19:19:50.763604 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8dffdf61-0425-4cf2-8b39-ee68b2581a4a" (UID: "8dffdf61-0425-4cf2-8b39-ee68b2581a4a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:19:50.764966 kubelet[2020]: I1002 19:19:50.763647 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-cilium-config-path\") pod \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\" (UID: \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\") " Oct 2 19:19:50.764966 kubelet[2020]: I1002 19:19:50.763658 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8dffdf61-0425-4cf2-8b39-ee68b2581a4a" (UID: "8dffdf61-0425-4cf2-8b39-ee68b2581a4a"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:19:50.765491 kubelet[2020]: I1002 19:19:50.763689 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-host-proc-sys-kernel\") pod \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\" (UID: \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\") " Oct 2 19:19:50.765491 kubelet[2020]: I1002 19:19:50.763728 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-host-proc-sys-net\") pod \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\" (UID: \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\") " Oct 2 19:19:50.765491 kubelet[2020]: I1002 19:19:50.763765 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-bpf-maps\") pod \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\" (UID: \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\") " Oct 2 19:19:50.765491 kubelet[2020]: I1002 19:19:50.763803 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-cni-path\") pod \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\" (UID: \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\") " Oct 2 19:19:50.765491 kubelet[2020]: I1002 19:19:50.763846 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-cilium-ipsec-secrets\") pod \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\" (UID: \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\") " Oct 2 19:19:50.765491 kubelet[2020]: I1002 19:19:50.763886 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-lib-modules\") pod \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\" (UID: \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\") " Oct 2 19:19:50.765848 kubelet[2020]: I1002 19:19:50.763947 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-xtables-lock\") pod \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\" (UID: \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\") " Oct 2 19:19:50.765848 kubelet[2020]: I1002 19:19:50.763996 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-hubble-tls\") pod \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\" (UID: \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\") " Oct 2 19:19:50.765848 kubelet[2020]: I1002 19:19:50.764040 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-clustermesh-secrets\") pod \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\" (UID: \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\") " Oct 2 19:19:50.765848 kubelet[2020]: I1002 19:19:50.764080 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-cilium-run\") pod \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\" (UID: \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\") " Oct 2 19:19:50.765848 kubelet[2020]: I1002 19:19:50.764119 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-hostproc\") pod \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\" (UID: \"8dffdf61-0425-4cf2-8b39-ee68b2581a4a\") " Oct 2 19:19:50.765848 kubelet[2020]: I1002 19:19:50.764177 2020 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-etc-cni-netd\") on node \"172.31.21.101\" DevicePath \"\"" Oct 2 19:19:50.765848 kubelet[2020]: I1002 19:19:50.764203 2020 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-cilium-cgroup\") on node \"172.31.21.101\" DevicePath \"\"" Oct 2 19:19:50.766330 kubelet[2020]: I1002 19:19:50.764243 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-hostproc" (OuterVolumeSpecName: "hostproc") pod "8dffdf61-0425-4cf2-8b39-ee68b2581a4a" (UID: "8dffdf61-0425-4cf2-8b39-ee68b2581a4a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:19:50.766330 kubelet[2020]: W1002 19:19:50.764541 2020 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/8dffdf61-0425-4cf2-8b39-ee68b2581a4a/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:19:50.773778 kubelet[2020]: I1002 19:19:50.773694 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8dffdf61-0425-4cf2-8b39-ee68b2581a4a" (UID: "8dffdf61-0425-4cf2-8b39-ee68b2581a4a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:19:50.774007 kubelet[2020]: I1002 19:19:50.773826 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8dffdf61-0425-4cf2-8b39-ee68b2581a4a" (UID: "8dffdf61-0425-4cf2-8b39-ee68b2581a4a"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:19:50.774007 kubelet[2020]: I1002 19:19:50.773945 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8dffdf61-0425-4cf2-8b39-ee68b2581a4a" (UID: "8dffdf61-0425-4cf2-8b39-ee68b2581a4a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:19:50.774683 kubelet[2020]: I1002 19:19:50.774582 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8dffdf61-0425-4cf2-8b39-ee68b2581a4a" (UID: "8dffdf61-0425-4cf2-8b39-ee68b2581a4a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:19:50.774850 kubelet[2020]: I1002 19:19:50.774684 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8dffdf61-0425-4cf2-8b39-ee68b2581a4a" (UID: "8dffdf61-0425-4cf2-8b39-ee68b2581a4a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:19:50.774850 kubelet[2020]: I1002 19:19:50.774733 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8dffdf61-0425-4cf2-8b39-ee68b2581a4a" (UID: "8dffdf61-0425-4cf2-8b39-ee68b2581a4a"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:19:50.774850 kubelet[2020]: I1002 19:19:50.774774 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-cni-path" (OuterVolumeSpecName: "cni-path") pod "8dffdf61-0425-4cf2-8b39-ee68b2581a4a" (UID: "8dffdf61-0425-4cf2-8b39-ee68b2581a4a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:19:50.775285 kubelet[2020]: I1002 19:19:50.774969 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-kube-api-access-fmbww" (OuterVolumeSpecName: "kube-api-access-fmbww") pod "8dffdf61-0425-4cf2-8b39-ee68b2581a4a" (UID: "8dffdf61-0425-4cf2-8b39-ee68b2581a4a"). InnerVolumeSpecName "kube-api-access-fmbww". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:19:50.775667 kubelet[2020]: I1002 19:19:50.775601 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8dffdf61-0425-4cf2-8b39-ee68b2581a4a" (UID: "8dffdf61-0425-4cf2-8b39-ee68b2581a4a"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:19:50.794870 env[1566]: time="2023-10-02T19:19:50.794766593Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:19:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3239 runtime=io.containerd.runc.v2\n" Oct 2 19:19:50.795543 env[1566]: time="2023-10-02T19:19:50.795460603Z" level=info msg="TearDown network for sandbox \"189506401bae42c983fcb64c23906eab76fb55a5c963e190088c57f01741ea2c\" successfully" Oct 2 19:19:50.795543 env[1566]: time="2023-10-02T19:19:50.795528391Z" level=info msg="StopPodSandbox for \"189506401bae42c983fcb64c23906eab76fb55a5c963e190088c57f01741ea2c\" returns successfully" Oct 2 19:19:50.799371 kubelet[2020]: I1002 19:19:50.799293 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8dffdf61-0425-4cf2-8b39-ee68b2581a4a" (UID: "8dffdf61-0425-4cf2-8b39-ee68b2581a4a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:19:50.801368 kubelet[2020]: I1002 19:19:50.800559 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "8dffdf61-0425-4cf2-8b39-ee68b2581a4a" (UID: "8dffdf61-0425-4cf2-8b39-ee68b2581a4a"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:19:50.804415 kubelet[2020]: I1002 19:19:50.804351 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8dffdf61-0425-4cf2-8b39-ee68b2581a4a" (UID: "8dffdf61-0425-4cf2-8b39-ee68b2581a4a"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:19:50.825583 kubelet[2020]: I1002 19:19:50.825532 2020 scope.go:115] "RemoveContainer" containerID="c05ef71cb16ddfe108168b3609c8abd04ffbcdfa3823e1b7fe1cb21abf4c6df7" Oct 2 19:19:50.829664 env[1566]: time="2023-10-02T19:19:50.829250635Z" level=info msg="RemoveContainer for \"c05ef71cb16ddfe108168b3609c8abd04ffbcdfa3823e1b7fe1cb21abf4c6df7\"" Oct 2 19:19:50.833702 env[1566]: time="2023-10-02T19:19:50.833535426Z" level=info msg="RemoveContainer for \"c05ef71cb16ddfe108168b3609c8abd04ffbcdfa3823e1b7fe1cb21abf4c6df7\" returns successfully" Oct 2 19:19:50.836480 kubelet[2020]: I1002 19:19:50.836446 2020 scope.go:115] "RemoveContainer" containerID="dde0710ad63cf9d29366ed65f1eea6035066f0dfd04f9246d2d6fa6eeb2ccdbc" Oct 2 19:19:50.837823 systemd[1]: Removed slice kubepods-burstable-pod8dffdf61_0425_4cf2_8b39_ee68b2581a4a.slice. Oct 2 19:19:50.843821 env[1566]: time="2023-10-02T19:19:50.843734264Z" level=info msg="RemoveContainer for \"dde0710ad63cf9d29366ed65f1eea6035066f0dfd04f9246d2d6fa6eeb2ccdbc\"" Oct 2 19:19:50.849213 env[1566]: time="2023-10-02T19:19:50.849099549Z" level=info msg="RemoveContainer for \"dde0710ad63cf9d29366ed65f1eea6035066f0dfd04f9246d2d6fa6eeb2ccdbc\" returns successfully" Oct 2 19:19:50.850004 kubelet[2020]: I1002 19:19:50.849926 2020 scope.go:115] "RemoveContainer" containerID="dde0710ad63cf9d29366ed65f1eea6035066f0dfd04f9246d2d6fa6eeb2ccdbc" Oct 2 19:19:50.850900 env[1566]: time="2023-10-02T19:19:50.850689829Z" level=error msg="ContainerStatus for \"dde0710ad63cf9d29366ed65f1eea6035066f0dfd04f9246d2d6fa6eeb2ccdbc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dde0710ad63cf9d29366ed65f1eea6035066f0dfd04f9246d2d6fa6eeb2ccdbc\": not found" Oct 2 19:19:50.851420 kubelet[2020]: E1002 19:19:50.851382 2020 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to 
find container \"dde0710ad63cf9d29366ed65f1eea6035066f0dfd04f9246d2d6fa6eeb2ccdbc\": not found" containerID="dde0710ad63cf9d29366ed65f1eea6035066f0dfd04f9246d2d6fa6eeb2ccdbc" Oct 2 19:19:50.851668 kubelet[2020]: I1002 19:19:50.851638 2020 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:dde0710ad63cf9d29366ed65f1eea6035066f0dfd04f9246d2d6fa6eeb2ccdbc} err="failed to get container status \"dde0710ad63cf9d29366ed65f1eea6035066f0dfd04f9246d2d6fa6eeb2ccdbc\": rpc error: code = NotFound desc = an error occurred when try to find container \"dde0710ad63cf9d29366ed65f1eea6035066f0dfd04f9246d2d6fa6eeb2ccdbc\": not found" Oct 2 19:19:50.864697 kubelet[2020]: I1002 19:19:50.864631 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpjlq\" (UniqueName: \"kubernetes.io/projected/9c85b917-4d6b-43e2-a667-246e729fc023-kube-api-access-fpjlq\") pod \"9c85b917-4d6b-43e2-a667-246e729fc023\" (UID: \"9c85b917-4d6b-43e2-a667-246e729fc023\") " Oct 2 19:19:50.864983 kubelet[2020]: I1002 19:19:50.864715 2020 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9c85b917-4d6b-43e2-a667-246e729fc023-cilium-config-path\") pod \"9c85b917-4d6b-43e2-a667-246e729fc023\" (UID: \"9c85b917-4d6b-43e2-a667-246e729fc023\") " Oct 2 19:19:50.864983 kubelet[2020]: I1002 19:19:50.864765 2020 reconciler.go:399] "Volume detached for volume \"kube-api-access-fmbww\" (UniqueName: \"kubernetes.io/projected/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-kube-api-access-fmbww\") on node \"172.31.21.101\" DevicePath \"\"" Oct 2 19:19:50.864983 kubelet[2020]: I1002 19:19:50.864796 2020 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-host-proc-sys-kernel\") on node \"172.31.21.101\" DevicePath \"\"" Oct 2 19:19:50.864983 kubelet[2020]: I1002 19:19:50.864824 2020 
reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-cilium-config-path\") on node \"172.31.21.101\" DevicePath \"\""
Oct 2 19:19:50.864983 kubelet[2020]: I1002 19:19:50.864849 2020 reconciler.go:399] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-cilium-ipsec-secrets\") on node \"172.31.21.101\" DevicePath \"\""
Oct 2 19:19:50.865372 kubelet[2020]: I1002 19:19:50.865006 2020 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-host-proc-sys-net\") on node \"172.31.21.101\" DevicePath \"\""
Oct 2 19:19:50.865372 kubelet[2020]: I1002 19:19:50.865047 2020 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-bpf-maps\") on node \"172.31.21.101\" DevicePath \"\""
Oct 2 19:19:50.865372 kubelet[2020]: I1002 19:19:50.865072 2020 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-cni-path\") on node \"172.31.21.101\" DevicePath \"\""
Oct 2 19:19:50.865372 kubelet[2020]: I1002 19:19:50.865095 2020 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-hubble-tls\") on node \"172.31.21.101\" DevicePath \"\""
Oct 2 19:19:50.865372 kubelet[2020]: I1002 19:19:50.865120 2020 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-clustermesh-secrets\") on node \"172.31.21.101\" DevicePath \"\""
Oct 2 19:19:50.865372 kubelet[2020]: I1002 19:19:50.865144 2020 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-cilium-run\") on node \"172.31.21.101\" DevicePath \"\""
Oct 2 19:19:50.865372 kubelet[2020]: I1002 19:19:50.865167 2020 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-lib-modules\") on node \"172.31.21.101\" DevicePath \"\""
Oct 2 19:19:50.865372 kubelet[2020]: I1002 19:19:50.865191 2020 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-xtables-lock\") on node \"172.31.21.101\" DevicePath \"\""
Oct 2 19:19:50.865372 kubelet[2020]: I1002 19:19:50.865266 2020 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8dffdf61-0425-4cf2-8b39-ee68b2581a4a-hostproc\") on node \"172.31.21.101\" DevicePath \"\""
Oct 2 19:19:50.866009 kubelet[2020]: W1002 19:19:50.865637 2020 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/9c85b917-4d6b-43e2-a667-246e729fc023/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Oct 2 19:19:50.871139 kubelet[2020]: I1002 19:19:50.871078 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c85b917-4d6b-43e2-a667-246e729fc023-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9c85b917-4d6b-43e2-a667-246e729fc023" (UID: "9c85b917-4d6b-43e2-a667-246e729fc023"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 2 19:19:50.876756 kubelet[2020]: I1002 19:19:50.876667 2020 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c85b917-4d6b-43e2-a667-246e729fc023-kube-api-access-fpjlq" (OuterVolumeSpecName: "kube-api-access-fpjlq") pod "9c85b917-4d6b-43e2-a667-246e729fc023" (UID: "9c85b917-4d6b-43e2-a667-246e729fc023"). InnerVolumeSpecName "kube-api-access-fpjlq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 19:19:50.913028 kubelet[2020]: E1002 19:19:50.912972 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:19:50.966309 kubelet[2020]: I1002 19:19:50.966268 2020 reconciler.go:399] "Volume detached for volume \"kube-api-access-fpjlq\" (UniqueName: \"kubernetes.io/projected/9c85b917-4d6b-43e2-a667-246e729fc023-kube-api-access-fpjlq\") on node \"172.31.21.101\" DevicePath \"\""
Oct 2 19:19:50.966478 kubelet[2020]: I1002 19:19:50.966318 2020 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9c85b917-4d6b-43e2-a667-246e729fc023-cilium-config-path\") on node \"172.31.21.101\" DevicePath \"\""
Oct 2 19:19:51.140432 systemd[1]: Removed slice kubepods-besteffort-pod9c85b917_4d6b_43e2_a667_246e729fc023.slice.
Oct 2 19:19:51.453334 systemd[1]: var-lib-kubelet-pods-8dffdf61\x2d0425\x2d4cf2\x2d8b39\x2dee68b2581a4a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfmbww.mount: Deactivated successfully.
Oct 2 19:19:51.453516 systemd[1]: var-lib-kubelet-pods-8dffdf61\x2d0425\x2d4cf2\x2d8b39\x2dee68b2581a4a-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Oct 2 19:19:51.453648 systemd[1]: var-lib-kubelet-pods-8dffdf61\x2d0425\x2d4cf2\x2d8b39\x2dee68b2581a4a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Oct 2 19:19:51.453787 systemd[1]: var-lib-kubelet-pods-8dffdf61\x2d0425\x2d4cf2\x2d8b39\x2dee68b2581a4a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Oct 2 19:19:51.453936 systemd[1]: var-lib-kubelet-pods-9c85b917\x2d4d6b\x2d43e2\x2da667\x2d246e729fc023-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfpjlq.mount: Deactivated successfully.
Oct 2 19:19:51.913939 kubelet[2020]: E1002 19:19:51.913863 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:19:52.154714 kubelet[2020]: I1002 19:19:52.154654 2020 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=8dffdf61-0425-4cf2-8b39-ee68b2581a4a path="/var/lib/kubelet/pods/8dffdf61-0425-4cf2-8b39-ee68b2581a4a/volumes"
Oct 2 19:19:52.155929 kubelet[2020]: I1002 19:19:52.155870 2020 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=9c85b917-4d6b-43e2-a667-246e729fc023 path="/var/lib/kubelet/pods/9c85b917-4d6b-43e2-a667-246e729fc023/volumes"
Oct 2 19:19:52.915609 kubelet[2020]: E1002 19:19:52.915439 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:19:53.670308 kubelet[2020]: E1002 19:19:53.670244 2020 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:19:53.915292 kubelet[2020]: E1002 19:19:53.915238 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:19:53.915700 kubelet[2020]: E1002 19:19:53.915672 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:19:54.916632 kubelet[2020]: E1002 19:19:54.916568 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:19:55.916971 kubelet[2020]: E1002 19:19:55.916891 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:19:56.917832 kubelet[2020]: E1002 19:19:56.917762 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:19:57.918078 kubelet[2020]: E1002 19:19:57.917995 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:19:58.078656 amazon-ssm-agent[1534]: 2023-10-02 19:19:58 INFO Backing off health check to every 600 seconds for 1800 seconds.
Oct 2 19:19:58.180142 amazon-ssm-agent[1534]: 2023-10-02 19:19:58 ERROR Health ping failed with error - AccessDeniedException: User: arn:aws:sts::075585003325:assumed-role/jenkins-test/i-0ad405c7a2916f0a4 is not authorized to perform: ssm:UpdateInstanceInformation on resource: arn:aws:ec2:us-west-2:075585003325:instance/i-0ad405c7a2916f0a4 because no identity-based policy allows the ssm:UpdateInstanceInformation action
Oct 2 19:19:58.180142 amazon-ssm-agent[1534]: status code: 400, request id: 27da7d33-01da-46e0-a622-adf9d224a4cf
Oct 2 19:19:58.916171 kubelet[2020]: E1002 19:19:58.916128 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:19:58.919756 kubelet[2020]: E1002 19:19:58.919726 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:19:59.921021 kubelet[2020]: E1002 19:19:59.920980 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:00.921895 kubelet[2020]: E1002 19:20:00.921855 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:01.922670 kubelet[2020]: E1002 19:20:01.922615 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:02.923832 kubelet[2020]: E1002 19:20:02.923778 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:03.916722 kubelet[2020]: E1002 19:20:03.916690 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:20:03.925361 kubelet[2020]: E1002 19:20:03.925335 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:04.926400 kubelet[2020]: E1002 19:20:04.926350 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:05.927623 kubelet[2020]: E1002 19:20:05.927567 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:06.092737 kubelet[2020]: E1002 19:20:06.092644 2020 controller.go:187] failed to update lease, error: Put "https://172.31.23.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.101?timeout=10s": unexpected EOF
Oct 2 19:20:06.093982 kubelet[2020]: E1002 19:20:06.093879 2020 controller.go:187] failed to update lease, error: Put "https://172.31.23.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.101?timeout=10s": dial tcp 172.31.23.67:6443: connect: connection refused
Oct 2 19:20:06.094382 kubelet[2020]: E1002 19:20:06.094340 2020 controller.go:187] failed to update lease, error: Put "https://172.31.23.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.101?timeout=10s": dial tcp 172.31.23.67:6443: connect: connection refused
Oct 2 19:20:06.094955 kubelet[2020]: E1002 19:20:06.094880 2020 controller.go:187] failed to update lease, error: Put "https://172.31.23.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.101?timeout=10s": dial tcp 172.31.23.67:6443: connect: connection refused
Oct 2 19:20:06.095692 kubelet[2020]: E1002 19:20:06.095647 2020 controller.go:187] failed to update lease, error: Put "https://172.31.23.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.101?timeout=10s": dial tcp 172.31.23.67:6443: connect: connection refused
Oct 2 19:20:06.095838 kubelet[2020]: I1002 19:20:06.095695 2020 controller.go:114] failed to update lease using latest lease, fallback to ensure lease, err: failed 5 attempts to update lease
Oct 2 19:20:06.096266 kubelet[2020]: E1002 19:20:06.096231 2020 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://172.31.23.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.101?timeout=10s": dial tcp 172.31.23.67:6443: connect: connection refused
Oct 2 19:20:06.927946 kubelet[2020]: E1002 19:20:06.927878 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:07.928762 kubelet[2020]: E1002 19:20:07.928698 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:08.918632 kubelet[2020]: E1002 19:20:08.918596 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:20:08.929341 kubelet[2020]: E1002 19:20:08.929287 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:09.929603 kubelet[2020]: E1002 19:20:09.929542 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:10.929740 kubelet[2020]: E1002 19:20:10.929690 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:11.930694 kubelet[2020]: E1002 19:20:11.930641 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:12.931485 kubelet[2020]: E1002 19:20:12.931436 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:13.669362 kubelet[2020]: E1002 19:20:13.669294 2020 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:13.725388 env[1566]: time="2023-10-02T19:20:13.725312505Z" level=info msg="StopPodSandbox for \"189506401bae42c983fcb64c23906eab76fb55a5c963e190088c57f01741ea2c\""
Oct 2 19:20:13.726019 env[1566]: time="2023-10-02T19:20:13.725451670Z" level=info msg="TearDown network for sandbox \"189506401bae42c983fcb64c23906eab76fb55a5c963e190088c57f01741ea2c\" successfully"
Oct 2 19:20:13.726019 env[1566]: time="2023-10-02T19:20:13.725511478Z" level=info msg="StopPodSandbox for \"189506401bae42c983fcb64c23906eab76fb55a5c963e190088c57f01741ea2c\" returns successfully"
Oct 2 19:20:13.726499 env[1566]: time="2023-10-02T19:20:13.726437389Z" level=info msg="RemovePodSandbox for \"189506401bae42c983fcb64c23906eab76fb55a5c963e190088c57f01741ea2c\""
Oct 2 19:20:13.726600 env[1566]: time="2023-10-02T19:20:13.726514237Z" level=info msg="Forcibly stopping sandbox \"189506401bae42c983fcb64c23906eab76fb55a5c963e190088c57f01741ea2c\""
Oct 2 19:20:13.726719 env[1566]: time="2023-10-02T19:20:13.726681386Z" level=info msg="TearDown network for sandbox \"189506401bae42c983fcb64c23906eab76fb55a5c963e190088c57f01741ea2c\" successfully"
Oct 2 19:20:13.731541 env[1566]: time="2023-10-02T19:20:13.731472919Z" level=info msg="RemovePodSandbox \"189506401bae42c983fcb64c23906eab76fb55a5c963e190088c57f01741ea2c\" returns successfully"
Oct 2 19:20:13.732316 env[1566]: time="2023-10-02T19:20:13.732267130Z" level=info msg="StopPodSandbox for \"1968a7cc7e63e662fa3fb4ca193b0e32ef798cda1da7664530e0ed9418ecfd87\""
Oct 2 19:20:13.732468 env[1566]: time="2023-10-02T19:20:13.732402707Z" level=info msg="TearDown network for sandbox \"1968a7cc7e63e662fa3fb4ca193b0e32ef798cda1da7664530e0ed9418ecfd87\" successfully"
Oct 2 19:20:13.732544 env[1566]: time="2023-10-02T19:20:13.732468587Z" level=info msg="StopPodSandbox for \"1968a7cc7e63e662fa3fb4ca193b0e32ef798cda1da7664530e0ed9418ecfd87\" returns successfully"
Oct 2 19:20:13.733078 env[1566]: time="2023-10-02T19:20:13.733039705Z" level=info msg="RemovePodSandbox for \"1968a7cc7e63e662fa3fb4ca193b0e32ef798cda1da7664530e0ed9418ecfd87\""
Oct 2 19:20:13.733286 env[1566]: time="2023-10-02T19:20:13.733229450Z" level=info msg="Forcibly stopping sandbox \"1968a7cc7e63e662fa3fb4ca193b0e32ef798cda1da7664530e0ed9418ecfd87\""
Oct 2 19:20:13.733509 env[1566]: time="2023-10-02T19:20:13.733473903Z" level=info msg="TearDown network for sandbox \"1968a7cc7e63e662fa3fb4ca193b0e32ef798cda1da7664530e0ed9418ecfd87\" successfully"
Oct 2 19:20:13.737567 env[1566]: time="2023-10-02T19:20:13.737514761Z" level=info msg="RemovePodSandbox \"1968a7cc7e63e662fa3fb4ca193b0e32ef798cda1da7664530e0ed9418ecfd87\" returns successfully"
Oct 2 19:20:13.756023 kubelet[2020]: W1002 19:20:13.755988 2020 machine.go:65] Cannot read vendor id correctly, set empty.
Oct 2 19:20:13.920944 kubelet[2020]: E1002 19:20:13.919993 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:20:13.932516 kubelet[2020]: E1002 19:20:13.932459 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:14.933437 kubelet[2020]: E1002 19:20:14.933404 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:15.934734 kubelet[2020]: E1002 19:20:15.934702 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:16.298894 kubelet[2020]: E1002 19:20:16.298439 2020 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: Get "https://172.31.23.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.101?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Oct 2 19:20:16.936482 kubelet[2020]: E1002 19:20:16.936425 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:17.937557 kubelet[2020]: E1002 19:20:17.937503 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:18.921279 kubelet[2020]: E1002 19:20:18.921227 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:20:18.937780 kubelet[2020]: E1002 19:20:18.937755 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:19.938571 kubelet[2020]: E1002 19:20:19.938538 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:20.939917 kubelet[2020]: E1002 19:20:20.939864 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:21.555330 kubelet[2020]: E1002 19:20:21.555277 2020 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"172.31.21.101\": Get \"https://172.31.23.67:6443/api/v1/nodes/172.31.21.101?resourceVersion=0&timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Oct 2 19:20:21.940926 kubelet[2020]: E1002 19:20:21.940841 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:22.941403 kubelet[2020]: E1002 19:20:22.941351 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:23.922493 kubelet[2020]: E1002 19:20:23.922440 2020 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:20:23.942047 kubelet[2020]: E1002 19:20:23.942021 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:20:24.942944 kubelet[2020]: E1002 19:20:24.942883 2020 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"