Oct 2 19:55:58.776003 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 2 19:55:58.776143 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Oct 2 17:55:37 -00 2023
Oct 2 19:55:58.776153 kernel: efi: EFI v2.70 by EDK II
Oct 2 19:55:58.776159 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Oct 2 19:55:58.776164 kernel: random: crng init done
Oct 2 19:55:58.776170 kernel: ACPI: Early table checksum verification disabled
Oct 2 19:55:58.776176 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Oct 2 19:55:58.776184 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Oct 2 19:55:58.776190 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 2 19:55:58.776195 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 2 19:55:58.776201 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 2 19:55:58.776206 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 2 19:55:58.776212 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 2 19:55:58.776217 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 2 19:55:58.776225 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 2 19:55:58.776244 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 2 19:55:58.776249 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 2 19:55:58.776255 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Oct 2 19:55:58.776261 kernel: NUMA: Failed to initialise from firmware
Oct 2 19:55:58.776267 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Oct 2 19:55:58.776272 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Oct 2 19:55:58.776278 kernel: Zone ranges:
Oct 2 19:55:58.776284 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Oct 2 19:55:58.776291 kernel: DMA32 empty
Oct 2 19:55:58.776296 kernel: Normal empty
Oct 2 19:55:58.776302 kernel: Movable zone start for each node
Oct 2 19:55:58.776308 kernel: Early memory node ranges
Oct 2 19:55:58.776313 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Oct 2 19:55:58.776319 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Oct 2 19:55:58.776325 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Oct 2 19:55:58.776331 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Oct 2 19:55:58.776336 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Oct 2 19:55:58.776342 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Oct 2 19:55:58.776347 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Oct 2 19:55:58.776353 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Oct 2 19:55:58.776360 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Oct 2 19:55:58.776365 kernel: psci: probing for conduit method from ACPI.
Oct 2 19:55:58.776371 kernel: psci: PSCIv1.1 detected in firmware.
Oct 2 19:55:58.776377 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 2 19:55:58.776382 kernel: psci: Trusted OS migration not required
Oct 2 19:55:58.776391 kernel: psci: SMC Calling Convention v1.1
Oct 2 19:55:58.776397 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 2 19:55:58.776405 kernel: ACPI: SRAT not present
Oct 2 19:55:58.776412 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Oct 2 19:55:58.776418 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Oct 2 19:55:58.776424 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Oct 2 19:55:58.776430 kernel: Detected PIPT I-cache on CPU0
Oct 2 19:55:58.776436 kernel: CPU features: detected: GIC system register CPU interface
Oct 2 19:55:58.776442 kernel: CPU features: detected: Hardware dirty bit management
Oct 2 19:55:58.776448 kernel: CPU features: detected: Spectre-v4
Oct 2 19:55:58.776454 kernel: CPU features: detected: Spectre-BHB
Oct 2 19:55:58.776461 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 2 19:55:58.776467 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 2 19:55:58.776473 kernel: CPU features: detected: ARM erratum 1418040
Oct 2 19:55:58.776480 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Oct 2 19:55:58.776487 kernel: Policy zone: DMA
Oct 2 19:55:58.776494 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca
Oct 2 19:55:58.776501 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 2 19:55:58.776507 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 2 19:55:58.776513 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 2 19:55:58.776519 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 2 19:55:58.776525 kernel: Memory: 2459280K/2572288K available (9792K kernel code, 2092K rwdata, 7548K rodata, 34560K init, 779K bss, 113008K reserved, 0K cma-reserved)
Oct 2 19:55:58.776533 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 2 19:55:58.776539 kernel: trace event string verifier disabled
Oct 2 19:55:58.776545 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 2 19:55:58.776551 kernel: rcu: RCU event tracing is enabled.
Oct 2 19:55:58.776557 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 2 19:55:58.776564 kernel: Trampoline variant of Tasks RCU enabled.
Oct 2 19:55:58.776570 kernel: Tracing variant of Tasks RCU enabled.
Oct 2 19:55:58.776576 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 2 19:55:58.776582 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 2 19:55:58.776588 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 2 19:55:58.776594 kernel: GICv3: 256 SPIs implemented
Oct 2 19:55:58.776601 kernel: GICv3: 0 Extended SPIs implemented
Oct 2 19:55:58.776607 kernel: GICv3: Distributor has no Range Selector support
Oct 2 19:55:58.776613 kernel: Root IRQ handler: gic_handle_irq
Oct 2 19:55:58.776619 kernel: GICv3: 16 PPIs implemented
Oct 2 19:55:58.776625 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 2 19:55:58.776631 kernel: ACPI: SRAT not present
Oct 2 19:55:58.776637 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 2 19:55:58.776644 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Oct 2 19:55:58.776650 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Oct 2 19:55:58.776656 kernel: GICv3: using LPI property table @0x00000000400d0000
Oct 2 19:55:58.776662 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Oct 2 19:55:58.776668 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 2 19:55:58.776675 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 2 19:55:58.776682 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 2 19:55:58.776689 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 2 19:55:58.776695 kernel: arm-pv: using stolen time PV
Oct 2 19:55:58.776701 kernel: Console: colour dummy device 80x25
Oct 2 19:55:58.776708 kernel: ACPI: Core revision 20210730
Oct 2 19:55:58.776714 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 2 19:55:58.776720 kernel: pid_max: default: 32768 minimum: 301
Oct 2 19:55:58.776727 kernel: LSM: Security Framework initializing
Oct 2 19:55:58.776733 kernel: SELinux: Initializing.
Oct 2 19:55:58.776740 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 2 19:55:58.776747 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 2 19:55:58.776754 kernel: rcu: Hierarchical SRCU implementation.
Oct 2 19:55:58.776760 kernel: Platform MSI: ITS@0x8080000 domain created
Oct 2 19:55:58.776766 kernel: PCI/MSI: ITS@0x8080000 domain created
Oct 2 19:55:58.776772 kernel: Remapping and enabling EFI services.
Oct 2 19:55:58.776779 kernel: smp: Bringing up secondary CPUs ...
Oct 2 19:55:58.776785 kernel: Detected PIPT I-cache on CPU1
Oct 2 19:55:58.776792 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 2 19:55:58.776799 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Oct 2 19:55:58.776806 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 2 19:55:58.776812 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 2 19:55:58.776818 kernel: Detected PIPT I-cache on CPU2
Oct 2 19:55:58.776825 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Oct 2 19:55:58.776831 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Oct 2 19:55:58.776838 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 2 19:55:58.776844 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Oct 2 19:55:58.776850 kernel: Detected PIPT I-cache on CPU3
Oct 2 19:55:58.776856 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Oct 2 19:55:58.776864 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Oct 2 19:55:58.776870 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 2 19:55:58.776876 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Oct 2 19:55:58.776882 kernel: smp: Brought up 1 node, 4 CPUs
Oct 2 19:55:58.776893 kernel: SMP: Total of 4 processors activated.
Oct 2 19:55:58.776901 kernel: CPU features: detected: 32-bit EL0 Support
Oct 2 19:55:58.776915 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 2 19:55:58.776922 kernel: CPU features: detected: Common not Private translations
Oct 2 19:55:58.776928 kernel: CPU features: detected: CRC32 instructions
Oct 2 19:55:58.776935 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 2 19:55:58.776941 kernel: CPU features: detected: LSE atomic instructions
Oct 2 19:55:58.776948 kernel: CPU features: detected: Privileged Access Never
Oct 2 19:55:58.776956 kernel: CPU features: detected: RAS Extension Support
Oct 2 19:55:58.776963 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 2 19:55:58.776970 kernel: CPU: All CPU(s) started at EL1
Oct 2 19:55:58.776976 kernel: alternatives: patching kernel code
Oct 2 19:55:58.776984 kernel: devtmpfs: initialized
Oct 2 19:55:58.776991 kernel: KASLR enabled
Oct 2 19:55:58.776998 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 2 19:55:58.777005 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 2 19:55:58.777028 kernel: pinctrl core: initialized pinctrl subsystem
Oct 2 19:55:58.777036 kernel: SMBIOS 3.0.0 present.
Oct 2 19:55:58.777044 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Oct 2 19:55:58.777051 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 2 19:55:58.777058 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 2 19:55:58.777065 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 2 19:55:58.777074 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 2 19:55:58.777081 kernel: audit: initializing netlink subsys (disabled)
Oct 2 19:55:58.777088 kernel: audit: type=2000 audit(0.032:1): state=initialized audit_enabled=0 res=1
Oct 2 19:55:58.777095 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 2 19:55:58.777102 kernel: cpuidle: using governor menu
Oct 2 19:55:58.777108 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 2 19:55:58.777115 kernel: ASID allocator initialised with 32768 entries
Oct 2 19:55:58.777122 kernel: ACPI: bus type PCI registered
Oct 2 19:55:58.777129 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 2 19:55:58.777136 kernel: Serial: AMBA PL011 UART driver
Oct 2 19:55:58.777143 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Oct 2 19:55:58.777150 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Oct 2 19:55:58.777157 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Oct 2 19:55:58.777164 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Oct 2 19:55:58.777170 kernel: cryptd: max_cpu_qlen set to 1000
Oct 2 19:55:58.777177 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 2 19:55:58.777184 kernel: ACPI: Added _OSI(Module Device)
Oct 2 19:55:58.777190 kernel: ACPI: Added _OSI(Processor Device)
Oct 2 19:55:58.777198 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 2 19:55:58.777205 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 2 19:55:58.777212 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Oct 2 19:55:58.777219 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Oct 2 19:55:58.777226 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Oct 2 19:55:58.777232 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 2 19:55:58.777239 kernel: ACPI: Interpreter enabled
Oct 2 19:55:58.777246 kernel: ACPI: Using GIC for interrupt routing
Oct 2 19:55:58.777252 kernel: ACPI: MCFG table detected, 1 entries
Oct 2 19:55:58.777260 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 2 19:55:58.777267 kernel: printk: console [ttyAMA0] enabled
Oct 2 19:55:58.777273 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 2 19:55:58.777428 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 2 19:55:58.777494 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 2 19:55:58.777552 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 2 19:55:58.777611 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 2 19:55:58.777674 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 2 19:55:58.777683 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 2 19:55:58.777690 kernel: PCI host bridge to bus 0000:00
Oct 2 19:55:58.777755 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 2 19:55:58.777810 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 2 19:55:58.777866 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 2 19:55:58.777936 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 2 19:55:58.778030 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Oct 2 19:55:58.778110 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Oct 2 19:55:58.778665 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Oct 2 19:55:58.778736 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Oct 2 19:55:58.778796 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 2 19:55:58.778856 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 2 19:55:58.778932 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Oct 2 19:55:58.779002 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Oct 2 19:55:58.779072 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Oct 2 19:55:58.779124 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 2 19:55:58.779177 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Oct 2 19:55:58.779186 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 2 19:55:58.779193 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 2 19:55:58.779200 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 2 19:55:58.779209 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 2 19:55:58.779215 kernel: iommu: Default domain type: Translated
Oct 2 19:55:58.779222 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 2 19:55:58.779247 kernel: vgaarb: loaded
Oct 2 19:55:58.779256 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 2 19:55:58.779262 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Oct 2 19:55:58.779269 kernel: PTP clock support registered
Oct 2 19:55:58.779276 kernel: Registered efivars operations
Oct 2 19:55:58.779283 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 2 19:55:58.779290 kernel: VFS: Disk quotas dquot_6.6.0
Oct 2 19:55:58.779298 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 2 19:55:58.779305 kernel: pnp: PnP ACPI init
Oct 2 19:55:58.779376 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Oct 2 19:55:58.779386 kernel: pnp: PnP ACPI: found 1 devices
Oct 2 19:55:58.779393 kernel: NET: Registered PF_INET protocol family
Oct 2 19:55:58.779400 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 2 19:55:58.779407 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 2 19:55:58.779415 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 2 19:55:58.779424 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 2 19:55:58.779431 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Oct 2 19:55:58.779437 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 2 19:55:58.779444 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 2 19:55:58.779451 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 2 19:55:58.779458 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 2 19:55:58.779464 kernel: PCI: CLS 0 bytes, default 64
Oct 2 19:55:58.779471 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Oct 2 19:55:58.779479 kernel: kvm [1]: HYP mode not available
Oct 2 19:55:58.779486 kernel: Initialise system trusted keyrings
Oct 2 19:55:58.779493 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 2 19:55:58.779499 kernel: Key type asymmetric registered
Oct 2 19:55:58.779506 kernel: Asymmetric key parser 'x509' registered
Oct 2 19:55:58.779513 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Oct 2 19:55:58.779519 kernel: io scheduler mq-deadline registered
Oct 2 19:55:58.779526 kernel: io scheduler kyber registered
Oct 2 19:55:58.779532 kernel: io scheduler bfq registered
Oct 2 19:55:58.779539 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 2 19:55:58.779548 kernel: ACPI: button: Power Button [PWRB]
Oct 2 19:55:58.779555 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Oct 2 19:55:58.779618 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Oct 2 19:55:58.779627 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 2 19:55:58.779634 kernel: thunder_xcv, ver 1.0
Oct 2 19:55:58.779640 kernel: thunder_bgx, ver 1.0
Oct 2 19:55:58.779647 kernel: nicpf, ver 1.0
Oct 2 19:55:58.779654 kernel: nicvf, ver 1.0
Oct 2 19:55:58.779724 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 2 19:55:58.779852 kernel: rtc-efi rtc-efi.0: setting system clock to 2023-10-02T19:55:58 UTC (1696276558)
Oct 2 19:55:58.779863 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 2 19:55:58.779870 kernel: NET: Registered PF_INET6 protocol family
Oct 2 19:55:58.779876 kernel: Segment Routing with IPv6
Oct 2 19:55:58.779883 kernel: In-situ OAM (IOAM) with IPv6
Oct 2 19:55:58.779890 kernel: NET: Registered PF_PACKET protocol family
Oct 2 19:55:58.779897 kernel: Key type dns_resolver registered
Oct 2 19:55:58.779911 kernel: registered taskstats version 1
Oct 2 19:55:58.779923 kernel: Loading compiled-in X.509 certificates
Oct 2 19:55:58.779930 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 3a2a38edc68cb70dc60ec0223a6460557b3bb28d'
Oct 2 19:55:58.779937 kernel: Key type .fscrypt registered
Oct 2 19:55:58.779943 kernel: Key type fscrypt-provisioning registered
Oct 2 19:55:58.779950 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 2 19:55:58.779957 kernel: ima: Allocated hash algorithm: sha1
Oct 2 19:55:58.779963 kernel: ima: No architecture policies found
Oct 2 19:55:58.779970 kernel: Freeing unused kernel memory: 34560K
Oct 2 19:55:58.779977 kernel: Run /init as init process
Oct 2 19:55:58.779985 kernel: with arguments:
Oct 2 19:55:58.779991 kernel: /init
Oct 2 19:55:58.779998 kernel: with environment:
Oct 2 19:55:58.780004 kernel: HOME=/
Oct 2 19:55:58.780010 kernel: TERM=linux
Oct 2 19:55:58.780027 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 2 19:55:58.780036 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 2 19:55:58.780045 systemd[1]: Detected virtualization kvm.
Oct 2 19:55:58.780054 systemd[1]: Detected architecture arm64.
Oct 2 19:55:58.780061 systemd[1]: Running in initrd.
Oct 2 19:55:58.780069 systemd[1]: No hostname configured, using default hostname.
Oct 2 19:55:58.780076 systemd[1]: Hostname set to .
Oct 2 19:55:58.780083 systemd[1]: Initializing machine ID from VM UUID.
Oct 2 19:55:58.780090 systemd[1]: Queued start job for default target initrd.target.
Oct 2 19:55:58.780097 systemd[1]: Started systemd-ask-password-console.path.
Oct 2 19:55:58.780105 systemd[1]: Reached target cryptsetup.target.
Oct 2 19:55:58.780113 systemd[1]: Reached target paths.target.
Oct 2 19:55:58.780120 systemd[1]: Reached target slices.target.
Oct 2 19:55:58.780127 systemd[1]: Reached target swap.target.
Oct 2 19:55:58.780134 systemd[1]: Reached target timers.target.
Oct 2 19:55:58.780142 systemd[1]: Listening on iscsid.socket.
Oct 2 19:55:58.780149 systemd[1]: Listening on iscsiuio.socket.
Oct 2 19:55:58.780156 systemd[1]: Listening on systemd-journald-audit.socket.
Oct 2 19:55:58.780164 systemd[1]: Listening on systemd-journald-dev-log.socket.
Oct 2 19:55:58.780171 systemd[1]: Listening on systemd-journald.socket.
Oct 2 19:55:58.780179 systemd[1]: Listening on systemd-networkd.socket.
Oct 2 19:55:58.780186 systemd[1]: Listening on systemd-udevd-control.socket.
Oct 2 19:55:58.780193 systemd[1]: Listening on systemd-udevd-kernel.socket.
Oct 2 19:55:58.780200 systemd[1]: Reached target sockets.target.
Oct 2 19:55:58.780207 systemd[1]: Starting kmod-static-nodes.service...
Oct 2 19:55:58.780214 systemd[1]: Finished network-cleanup.service.
Oct 2 19:55:58.780221 systemd[1]: Starting systemd-fsck-usr.service...
Oct 2 19:55:58.780229 systemd[1]: Starting systemd-journald.service...
Oct 2 19:55:58.780237 systemd[1]: Starting systemd-modules-load.service...
Oct 2 19:55:58.780244 systemd[1]: Starting systemd-resolved.service...
Oct 2 19:55:58.780251 systemd[1]: Starting systemd-vconsole-setup.service...
Oct 2 19:55:58.780258 systemd[1]: Finished kmod-static-nodes.service.
Oct 2 19:55:58.780265 systemd[1]: Finished systemd-fsck-usr.service.
Oct 2 19:55:58.780272 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Oct 2 19:55:58.780279 systemd[1]: Finished systemd-vconsole-setup.service.
Oct 2 19:55:58.780287 kernel: audit: type=1130 audit(1696276558.777:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.780296 systemd[1]: Starting dracut-cmdline-ask.service...
Oct 2 19:55:58.780307 systemd-journald[290]: Journal started
Oct 2 19:55:58.780351 systemd-journald[290]: Runtime Journal (/run/log/journal/b1c3f4ad3bc64c45ac50969296877504) is 6.0M, max 48.7M, 42.6M free.
Oct 2 19:55:58.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.771590 systemd-modules-load[291]: Inserted module 'overlay'
Oct 2 19:55:58.781775 systemd[1]: Started systemd-journald.service.
Oct 2 19:55:58.786637 kernel: audit: type=1130 audit(1696276558.782:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.786673 kernel: audit: type=1130 audit(1696276558.784:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.782683 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Oct 2 19:55:58.791497 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 2 19:55:58.797226 systemd-modules-load[291]: Inserted module 'br_netfilter'
Oct 2 19:55:58.798028 kernel: Bridge firewalling registered
Oct 2 19:55:58.801833 systemd[1]: Finished dracut-cmdline-ask.service.
Oct 2 19:55:58.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.803303 systemd[1]: Starting dracut-cmdline.service...
Oct 2 19:55:58.807354 kernel: audit: type=1130 audit(1696276558.802:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.806153 systemd-resolved[292]: Positive Trust Anchors:
Oct 2 19:55:58.806244 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 2 19:55:58.806279 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Oct 2 19:55:58.811800 systemd-resolved[292]: Defaulting to hostname 'linux'.
Oct 2 19:55:58.817138 kernel: SCSI subsystem initialized
Oct 2 19:55:58.817159 kernel: audit: type=1130 audit(1696276558.814:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.812787 systemd[1]: Started systemd-resolved.service.
Oct 2 19:55:58.814943 systemd[1]: Reached target nss-lookup.target.
Oct 2 19:55:58.822127 dracut-cmdline[308]: dracut-dracut-053
Oct 2 19:55:58.823823 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 2 19:55:58.823850 kernel: device-mapper: uevent: version 1.0.3
Oct 2 19:55:58.823887 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Oct 2 19:55:58.824818 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca
Oct 2 19:55:58.831166 systemd-modules-load[291]: Inserted module 'dm_multipath'
Oct 2 19:55:58.831964 systemd[1]: Finished systemd-modules-load.service.
Oct 2 19:55:58.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.833436 systemd[1]: Starting systemd-sysctl.service...
Oct 2 19:55:58.836231 kernel: audit: type=1130 audit(1696276558.832:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.842661 systemd[1]: Finished systemd-sysctl.service.
Oct 2 19:55:58.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.846106 kernel: audit: type=1130 audit(1696276558.843:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.900035 kernel: Loading iSCSI transport class v2.0-870.
Oct 2 19:55:58.909047 kernel: iscsi: registered transport (tcp)
Oct 2 19:55:58.922044 kernel: iscsi: registered transport (qla4xxx)
Oct 2 19:55:58.923046 kernel: QLogic iSCSI HBA Driver
Oct 2 19:55:58.980437 systemd[1]: Finished dracut-cmdline.service.
Oct 2 19:55:58.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.982222 systemd[1]: Starting dracut-pre-udev.service...
Oct 2 19:55:58.984658 kernel: audit: type=1130 audit(1696276558.981:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:59.033050 kernel: raid6: neonx8 gen() 13491 MB/s
Oct 2 19:55:59.050038 kernel: raid6: neonx8 xor() 10442 MB/s
Oct 2 19:55:59.067051 kernel: raid6: neonx4 gen() 13212 MB/s
Oct 2 19:55:59.084033 kernel: raid6: neonx4 xor() 11250 MB/s
Oct 2 19:55:59.101028 kernel: raid6: neonx2 gen() 12936 MB/s
Oct 2 19:55:59.118027 kernel: raid6: neonx2 xor() 10256 MB/s
Oct 2 19:55:59.135028 kernel: raid6: neonx1 gen() 10457 MB/s
Oct 2 19:55:59.152024 kernel: raid6: neonx1 xor() 8769 MB/s
Oct 2 19:55:59.169031 kernel: raid6: int64x8 gen() 6225 MB/s
Oct 2 19:55:59.186028 kernel: raid6: int64x8 xor() 3462 MB/s
Oct 2 19:55:59.203028 kernel: raid6: int64x4 gen() 7105 MB/s
Oct 2 19:55:59.220027 kernel: raid6: int64x4 xor() 3779 MB/s
Oct 2 19:55:59.237028 kernel: raid6: int64x2 gen() 6048 MB/s
Oct 2 19:55:59.254026 kernel: raid6: int64x2 xor() 3240 MB/s
Oct 2 19:55:59.271028 kernel: raid6: int64x1 gen() 4948 MB/s
Oct 2 19:55:59.288238 kernel: raid6: int64x1 xor() 2616 MB/s
Oct 2 19:55:59.288255 kernel: raid6: using algorithm neonx8 gen() 13491 MB/s
Oct 2 19:55:59.288264 kernel: raid6: .... xor() 10442 MB/s, rmw enabled
Oct 2 19:55:59.288273 kernel: raid6: using neon recovery algorithm
Oct 2 19:55:59.299025 kernel: xor: measuring software checksum speed
Oct 2 19:55:59.300231 kernel: 8regs : 17213 MB/sec
Oct 2 19:55:59.300247 kernel: 32regs : 20755 MB/sec
Oct 2 19:55:59.301363 kernel: arm64_neon : 27882 MB/sec
Oct 2 19:55:59.301377 kernel: xor: using function: arm64_neon (27882 MB/sec)
Oct 2 19:55:59.357035 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Oct 2 19:55:59.369653 systemd[1]: Finished dracut-pre-udev.service.
Oct 2 19:55:59.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:59.372000 audit: BPF prog-id=7 op=LOAD
Oct 2 19:55:59.372000 audit: BPF prog-id=8 op=LOAD
Oct 2 19:55:59.372851 systemd[1]: Starting systemd-udevd.service...
Oct 2 19:55:59.373846 kernel: audit: type=1130 audit(1696276559.370:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:59.386752 systemd-udevd[493]: Using default interface naming scheme 'v252'.
Oct 2 19:55:59.390196 systemd[1]: Started systemd-udevd.service.
Oct 2 19:55:59.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:59.392237 systemd[1]: Starting dracut-pre-trigger.service...
Oct 2 19:55:59.405988 dracut-pre-trigger[500]: rd.md=0: removing MD RAID activation
Oct 2 19:55:59.453531 systemd[1]: Finished dracut-pre-trigger.service.
Oct 2 19:55:59.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:59.454951 systemd[1]: Starting systemd-udev-trigger.service...
Oct 2 19:55:59.494884 systemd[1]: Finished systemd-udev-trigger.service.
Oct 2 19:55:59.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:59.531985 kernel: virtio_blk virtio1: [vda] 9289728 512-byte logical blocks (4.76 GB/4.43 GiB)
Oct 2 19:55:59.535029 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 2 19:55:59.561038 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (546)
Oct 2 19:55:59.561508 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Oct 2 19:55:59.562288 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Oct 2 19:55:59.565962 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Oct 2 19:55:59.572708 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Oct 2 19:55:59.575845 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Oct 2 19:55:59.577409 systemd[1]: Starting disk-uuid.service...
Oct 2 19:55:59.585036 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 2 19:56:00.599911 disk-uuid[565]: The operation has completed successfully.
Oct 2 19:56:00.600883 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 2 19:56:00.627580 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 2 19:56:00.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:00.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:00.627673 systemd[1]: Finished disk-uuid.service.
Oct 2 19:56:00.629172 systemd[1]: Starting verity-setup.service...
Oct 2 19:56:00.647036 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Oct 2 19:56:00.670265 systemd[1]: Found device dev-mapper-usr.device.
Oct 2 19:56:00.672486 systemd[1]: Mounting sysusr-usr.mount...
Oct 2 19:56:00.675100 systemd[1]: Finished verity-setup.service.
Oct 2 19:56:00.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:00.735058 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Oct 2 19:56:00.735294 systemd[1]: Mounted sysusr-usr.mount.
Oct 2 19:56:00.736003 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Oct 2 19:56:00.736730 systemd[1]: Starting ignition-setup.service...
Oct 2 19:56:00.738296 systemd[1]: Starting parse-ip-for-networkd.service...
Oct 2 19:56:00.747514 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 2 19:56:00.747571 kernel: BTRFS info (device vda6): using free space tree
Oct 2 19:56:00.747582 kernel: BTRFS info (device vda6): has skinny extents
Oct 2 19:56:00.758000 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 2 19:56:00.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:00.767274 systemd[1]: Finished ignition-setup.service.
Oct 2 19:56:00.768686 systemd[1]: Starting ignition-fetch-offline.service...
Oct 2 19:56:00.841176 ignition[643]: Ignition 2.14.0
Oct 2 19:56:00.841226 ignition[643]: Stage: fetch-offline
Oct 2 19:56:00.841278 ignition[643]: no configs at "/usr/lib/ignition/base.d"
Oct 2 19:56:00.841287 ignition[643]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 2 19:56:00.841425 ignition[643]: parsed url from cmdline: ""
Oct 2 19:56:00.841428 ignition[643]: no config URL provided
Oct 2 19:56:00.841432 ignition[643]: reading system config file "/usr/lib/ignition/user.ign"
Oct 2 19:56:00.841439 ignition[643]: no config at "/usr/lib/ignition/user.ign"
Oct 2 19:56:00.841457 ignition[643]: op(1): [started] loading QEMU firmware config module
Oct 2 19:56:00.841462 ignition[643]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 2 19:56:00.848778 ignition[643]: op(1): [finished] loading QEMU firmware config module
Oct 2 19:56:00.869230 ignition[643]: parsing config with SHA512: 4e25a064c4e681db3a0bc171deacf1c297fc34751291efa7fc3014ce8539eeb680ad111884f820f114014a7c87a215b54e63c18f183d86ee32e24120fcf7a572
Oct 2 19:56:00.869596 systemd[1]: Finished parse-ip-for-networkd.service.
Oct 2 19:56:00.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:00.871000 audit: BPF prog-id=9 op=LOAD
Oct 2 19:56:00.871649 systemd[1]: Starting systemd-networkd.service...
Oct 2 19:56:00.888412 unknown[643]: fetched base config from "system"
Oct 2 19:56:00.888427 unknown[643]: fetched user config from "qemu"
Oct 2 19:56:00.888854 ignition[643]: fetch-offline: fetch-offline passed
Oct 2 19:56:00.888937 ignition[643]: Ignition finished successfully
Oct 2 19:56:00.894090 systemd[1]: Finished ignition-fetch-offline.service.
Oct 2 19:56:00.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:00.898133 systemd-networkd[740]: lo: Link UP
Oct 2 19:56:00.898141 systemd-networkd[740]: lo: Gained carrier
Oct 2 19:56:00.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:00.898504 systemd-networkd[740]: Enumeration completed
Oct 2 19:56:00.898583 systemd[1]: Started systemd-networkd.service.
Oct 2 19:56:00.899301 systemd[1]: Reached target network.target.
Oct 2 19:56:00.899424 systemd-networkd[740]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 2 19:56:00.900393 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 2 19:56:00.900585 systemd-networkd[740]: eth0: Link UP
Oct 2 19:56:00.900589 systemd-networkd[740]: eth0: Gained carrier
Oct 2 19:56:00.901158 systemd[1]: Starting ignition-kargs.service...
Oct 2 19:56:00.902574 systemd[1]: Starting iscsiuio.service...
Oct 2 19:56:00.915137 systemd[1]: Started iscsiuio.service.
Oct 2 19:56:00.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:00.916883 systemd[1]: Starting iscsid.service...
Oct 2 19:56:00.917112 systemd-networkd[740]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 2 19:56:00.920731 iscsid[751]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Oct 2 19:56:00.920731 iscsid[751]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Oct 2 19:56:00.920731 iscsid[751]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Oct 2 19:56:00.920731 iscsid[751]: If using hardware iscsi like qla4xxx this message can be ignored.
Oct 2 19:56:00.920731 iscsid[751]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Oct 2 19:56:00.920731 iscsid[751]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Oct 2 19:56:00.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:00.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:00.921322 ignition[742]: Ignition 2.14.0
Oct 2 19:56:00.924397 systemd[1]: Started iscsid.service.
Oct 2 19:56:00.921329 ignition[742]: Stage: kargs
Oct 2 19:56:00.926681 systemd[1]: Finished ignition-kargs.service.
Oct 2 19:56:00.921429 ignition[742]: no configs at "/usr/lib/ignition/base.d"
Oct 2 19:56:00.928338 systemd[1]: Starting dracut-initqueue.service...
Oct 2 19:56:00.921437 ignition[742]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 2 19:56:00.930384 systemd[1]: Starting ignition-disks.service...
Oct 2 19:56:00.922313 ignition[742]: kargs: kargs passed
Oct 2 19:56:00.922357 ignition[742]: Ignition finished successfully
Oct 2 19:56:00.938532 ignition[753]: Ignition 2.14.0
Oct 2 19:56:00.938542 ignition[753]: Stage: disks
Oct 2 19:56:00.938634 ignition[753]: no configs at "/usr/lib/ignition/base.d"
Oct 2 19:56:00.940209 systemd[1]: Finished ignition-disks.service.
Oct 2 19:56:00.938643 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 2 19:56:00.941627 systemd[1]: Reached target initrd-root-device.target.
Oct 2 19:56:00.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:00.939421 ignition[753]: disks: disks passed
Oct 2 19:56:00.942535 systemd[1]: Reached target local-fs-pre.target.
Oct 2 19:56:00.939463 ignition[753]: Ignition finished successfully
Oct 2 19:56:00.943861 systemd[1]: Reached target local-fs.target.
Oct 2 19:56:00.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:00.944826 systemd[1]: Reached target sysinit.target.
Oct 2 19:56:00.945641 systemd[1]: Reached target basic.target.
Oct 2 19:56:00.946922 systemd[1]: Finished dracut-initqueue.service.
Oct 2 19:56:00.947656 systemd[1]: Reached target remote-fs-pre.target.
Oct 2 19:56:00.948568 systemd[1]: Reached target remote-cryptsetup.target.
Oct 2 19:56:00.949748 systemd[1]: Reached target remote-fs.target.
Oct 2 19:56:00.951529 systemd[1]: Starting dracut-pre-mount.service...
Oct 2 19:56:00.960683 systemd[1]: Finished dracut-pre-mount.service.
Oct 2 19:56:00.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:00.962124 systemd[1]: Starting systemd-fsck-root.service...
Oct 2 19:56:00.974888 systemd-fsck[773]: ROOT: clean, 603/553520 files, 56011/553472 blocks
Oct 2 19:56:00.978677 systemd[1]: Finished systemd-fsck-root.service.
Oct 2 19:56:00.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:00.980151 systemd[1]: Mounting sysroot.mount...
Oct 2 19:56:00.989803 systemd[1]: Mounted sysroot.mount.
Oct 2 19:56:00.990736 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Oct 2 19:56:00.990414 systemd[1]: Reached target initrd-root-fs.target.
Oct 2 19:56:00.992179 systemd[1]: Mounting sysroot-usr.mount...
Oct 2 19:56:00.992872 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Oct 2 19:56:00.992921 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 2 19:56:00.992942 systemd[1]: Reached target ignition-diskful.target.
Oct 2 19:56:00.995593 systemd[1]: Mounted sysroot-usr.mount.
Oct 2 19:56:00.996990 systemd[1]: Starting initrd-setup-root.service...
Oct 2 19:56:01.003080 initrd-setup-root[783]: cut: /sysroot/etc/passwd: No such file or directory
Oct 2 19:56:01.008249 initrd-setup-root[791]: cut: /sysroot/etc/group: No such file or directory
Oct 2 19:56:01.012174 initrd-setup-root[799]: cut: /sysroot/etc/shadow: No such file or directory
Oct 2 19:56:01.016536 initrd-setup-root[807]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 2 19:56:01.047913 systemd[1]: Finished initrd-setup-root.service.
Oct 2 19:56:01.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:01.049331 systemd[1]: Starting ignition-mount.service...
Oct 2 19:56:01.050557 systemd[1]: Starting sysroot-boot.service...
Oct 2 19:56:01.056064 bash[824]: umount: /sysroot/usr/share/oem: not mounted.
Oct 2 19:56:01.066344 ignition[826]: INFO : Ignition 2.14.0
Oct 2 19:56:01.066344 ignition[826]: INFO : Stage: mount
Oct 2 19:56:01.067532 ignition[826]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 2 19:56:01.067532 ignition[826]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 2 19:56:01.067532 ignition[826]: INFO : mount: mount passed
Oct 2 19:56:01.067532 ignition[826]: INFO : Ignition finished successfully
Oct 2 19:56:01.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:01.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:01.069257 systemd[1]: Finished ignition-mount.service.
Oct 2 19:56:01.070094 systemd[1]: Finished sysroot-boot.service.
Oct 2 19:56:01.694980 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Oct 2 19:56:01.714274 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (834)
Oct 2 19:56:01.714318 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 2 19:56:01.714328 kernel: BTRFS info (device vda6): using free space tree
Oct 2 19:56:01.715152 kernel: BTRFS info (device vda6): has skinny extents
Oct 2 19:56:01.719947 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Oct 2 19:56:01.721447 systemd[1]: Starting ignition-files.service...
Oct 2 19:56:01.738601 ignition[854]: INFO : Ignition 2.14.0
Oct 2 19:56:01.738601 ignition[854]: INFO : Stage: files
Oct 2 19:56:01.739873 ignition[854]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 2 19:56:01.739873 ignition[854]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 2 19:56:01.739873 ignition[854]: DEBUG : files: compiled without relabeling support, skipping
Oct 2 19:56:01.750032 ignition[854]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 2 19:56:01.751211 ignition[854]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 2 19:56:01.753735 ignition[854]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 2 19:56:01.754810 ignition[854]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 2 19:56:01.756191 unknown[854]: wrote ssh authorized keys file for user: core
Oct 2 19:56:01.757086 ignition[854]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 2 19:56:01.757086 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz"
Oct 2 19:56:01.757086 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1
Oct 2 19:56:02.039761 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 2 19:56:02.293637 ignition[854]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a
Oct 2 19:56:02.293637 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz"
Oct 2 19:56:02.297001 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz"
Oct 2 19:56:02.297001 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1
Oct 2 19:56:02.446953 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Oct 2 19:56:02.567334 systemd-networkd[740]: eth0: Gained IPv6LL
Oct 2 19:56:02.576262 ignition[854]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251
Oct 2 19:56:02.578304 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz"
Oct 2 19:56:02.578304 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm"
Oct 2 19:56:02.580982 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://storage.googleapis.com/kubernetes-release/release/v1.27.2/bin/linux/arm64/kubeadm: attempt #1
Oct 2 19:56:02.666609 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Oct 2 19:56:03.086223 ignition[854]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 45b3100984c979ba0f1c0df8f4211474c2d75ebe916e677dff5fc8e3b3697cf7a953da94e356f39684cc860dff6878b772b7514c55651c2f866d9efeef23f970
Oct 2 19:56:03.086223 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm"
Oct 2 19:56:03.086223 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet"
Oct 2 19:56:03.086223 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://storage.googleapis.com/kubernetes-release/release/v1.27.2/bin/linux/arm64/kubelet: attempt #1
Oct 2 19:56:03.132381 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Oct 2 19:56:03.832119 ignition[854]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 71857ff499ae135fa478e1827a0ed8865e578a8d2b1e25876e914fd0beba03733801c0654bcd4c0567bafeb16887dafb2dbbe8d1116e6ea28dcd8366c142d348
Oct 2 19:56:03.834133 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet"
Oct 2 19:56:03.835382 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Oct 2 19:56:03.835382 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Oct 2 19:56:03.835382 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json"
Oct 2 19:56:03.835382 ignition[854]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json"
Oct 2 19:56:03.835382 ignition[854]: INFO : files: op(9): [started] processing unit "prepare-cni-plugins.service"
Oct 2 19:56:03.835382 ignition[854]: INFO : files: op(9): op(a): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Oct 2 19:56:03.835382 ignition[854]: INFO : files: op(9): op(a): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Oct 2 19:56:03.835382 ignition[854]: INFO : files: op(9): [finished] processing unit "prepare-cni-plugins.service"
Oct 2 19:56:03.835382 ignition[854]: INFO : files: op(b): [started] processing unit "prepare-critools.service"
Oct 2 19:56:03.835382 ignition[854]: INFO : files: op(b): op(c): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Oct 2 19:56:03.835382 ignition[854]: INFO : files: op(b): op(c): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Oct 2 19:56:03.835382 ignition[854]: INFO : files: op(b): [finished] processing unit "prepare-critools.service"
Oct 2 19:56:03.835382 ignition[854]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 2 19:56:03.835382 ignition[854]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 2 19:56:03.851984 ignition[854]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 2 19:56:03.851984 ignition[854]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 2 19:56:03.851984 ignition[854]: INFO : files: op(f): [started] setting preset to enabled for "prepare-cni-plugins.service"
Oct 2 19:56:03.851984 ignition[854]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Oct 2 19:56:03.851984 ignition[854]: INFO : files: op(10): [started] setting preset to enabled for "prepare-critools.service"
Oct 2 19:56:03.851984 ignition[854]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-critools.service"
Oct 2 19:56:03.851984 ignition[854]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service"
Oct 2 19:56:03.851984 ignition[854]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 2 19:56:03.881590 ignition[854]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 2 19:56:03.882650 ignition[854]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 2 19:56:03.882650 ignition[854]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 2 19:56:03.882650 ignition[854]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 2 19:56:03.882650 ignition[854]: INFO : files: files passed
Oct 2 19:56:03.882650 ignition[854]: INFO : Ignition finished successfully
Oct 2 19:56:03.891255 kernel: kauditd_printk_skb: 23 callbacks suppressed
Oct 2 19:56:03.891278 kernel: audit: type=1130 audit(1696276563.884:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:03.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:03.883224 systemd[1]: Finished ignition-files.service.
Oct 2 19:56:03.885167 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Oct 2 19:56:03.892878 initrd-setup-root-after-ignition[879]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Oct 2 19:56:03.888701 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Oct 2 19:56:03.899186 kernel: audit: type=1130 audit(1696276563.894:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:03.899210 kernel: audit: type=1131 audit(1696276563.894:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:03.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:03.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:03.899305 initrd-setup-root-after-ignition[882]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 2 19:56:03.902493 kernel: audit: type=1130 audit(1696276563.899:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:03.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:03.889503 systemd[1]: Starting ignition-quench.service...
Oct 2 19:56:03.893359 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 2 19:56:03.893444 systemd[1]: Finished ignition-quench.service.
Oct 2 19:56:03.895755 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Oct 2 19:56:03.899859 systemd[1]: Reached target ignition-complete.target.
Oct 2 19:56:03.903779 systemd[1]: Starting initrd-parse-etc.service...
Oct 2 19:56:03.920262 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 2 19:56:03.920359 systemd[1]: Finished initrd-parse-etc.service.
Oct 2 19:56:03.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:03.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:03.921759 systemd[1]: Reached target initrd-fs.target.
Oct 2 19:56:03.926421 kernel: audit: type=1130 audit(1696276563.921:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:03.926443 kernel: audit: type=1131 audit(1696276563.921:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:03.925960 systemd[1]: Reached target initrd.target.
Oct 2 19:56:03.926928 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Oct 2 19:56:03.927757 systemd[1]: Starting dracut-pre-pivot.service...
Oct 2 19:56:03.940945 systemd[1]: Finished dracut-pre-pivot.service.
Oct 2 19:56:03.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:03.942408 systemd[1]: Starting initrd-cleanup.service...
Oct 2 19:56:03.944661 kernel: audit: type=1130 audit(1696276563.941:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:03.952334 systemd[1]: Stopped target nss-lookup.target.
Oct 2 19:56:03.952978 systemd[1]: Stopped target remote-cryptsetup.target.
Oct 2 19:56:03.953990 systemd[1]: Stopped target timers.target.
Oct 2 19:56:03.954909 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 2 19:56:03.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:03.955023 systemd[1]: Stopped dracut-pre-pivot.service.
Oct 2 19:56:03.959479 kernel: audit: type=1131 audit(1696276563.955:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:03.955929 systemd[1]: Stopped target initrd.target.
Oct 2 19:56:03.959102 systemd[1]: Stopped target basic.target.
Oct 2 19:56:03.960000 systemd[1]: Stopped target ignition-complete.target.
Oct 2 19:56:03.960933 systemd[1]: Stopped target ignition-diskful.target.
Oct 2 19:56:03.961852 systemd[1]: Stopped target initrd-root-device.target.
Oct 2 19:56:03.962868 systemd[1]: Stopped target remote-fs.target.
Oct 2 19:56:03.963853 systemd[1]: Stopped target remote-fs-pre.target.
Oct 2 19:56:03.964828 systemd[1]: Stopped target sysinit.target.
Oct 2 19:56:03.965696 systemd[1]: Stopped target local-fs.target.
Oct 2 19:56:03.966585 systemd[1]: Stopped target local-fs-pre.target.
Oct 2 19:56:03.967489 systemd[1]: Stopped target swap.target.
Oct 2 19:56:03.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:03.968347 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 2 19:56:03.968458 systemd[1]: Stopped dracut-pre-mount.service.
Oct 2 19:56:03.969368 systemd[1]: Stopped target cryptsetup.target.
Oct 2 19:56:03.971729 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 2 19:56:03.976764 kernel: audit: type=1131 audit(1696276563.969:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Oct 2 19:56:03.976788 kernel: audit: type=1131 audit(1696276563.972:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:03.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:03.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:03.971829 systemd[1]: Stopped dracut-initqueue.service. Oct 2 19:56:03.972827 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 19:56:03.972929 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 19:56:03.973769 systemd[1]: Stopped target paths.target. Oct 2 19:56:03.976178 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 19:56:03.980067 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 19:56:03.980748 systemd[1]: Stopped target slices.target. Oct 2 19:56:03.981316 systemd[1]: Stopped target sockets.target. Oct 2 19:56:03.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:03.982325 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 19:56:03.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:03.982431 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 19:56:03.983381 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 19:56:03.983462 systemd[1]: Stopped ignition-files.service. Oct 2 19:56:03.985122 systemd[1]: Stopping ignition-mount.service... Oct 2 19:56:03.986192 systemd[1]: Stopping iscsid.service... Oct 2 19:56:03.987644 systemd[1]: Stopping sysroot-boot.service... Oct 2 19:56:03.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:03.989969 iscsid[751]: iscsid shutting down. Oct 2 19:56:03.988508 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 19:56:03.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:03.988656 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 19:56:03.989656 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 19:56:03.989751 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 19:56:03.992824 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 19:56:03.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:56:03.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:03.993061 systemd[1]: Finished initrd-cleanup.service. Oct 2 19:56:03.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:03.995529 ignition[895]: INFO : Ignition 2.14.0 Oct 2 19:56:03.995529 ignition[895]: INFO : Stage: umount Oct 2 19:56:03.995529 ignition[895]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:56:03.995529 ignition[895]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:56:03.994254 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 19:56:04.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:04.000741 ignition[895]: INFO : umount: umount passed Oct 2 19:56:04.000741 ignition[895]: INFO : Ignition finished successfully Oct 2 19:56:04.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:03.994343 systemd[1]: Stopped iscsid.service. Oct 2 19:56:03.995217 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 19:56:04.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:03.995243 systemd[1]: Closed iscsid.socket. Oct 2 19:56:04.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:03.996004 systemd[1]: Stopping iscsiuio.service... Oct 2 19:56:04.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:03.999173 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 19:56:03.999259 systemd[1]: Stopped iscsiuio.service. Oct 2 19:56:04.000397 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 19:56:04.000470 systemd[1]: Stopped ignition-mount.service. Oct 2 19:56:04.001291 systemd[1]: Stopped target network.target. Oct 2 19:56:04.002157 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 19:56:04.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:04.002187 systemd[1]: Closed iscsiuio.socket. Oct 2 19:56:04.003133 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 19:56:04.003172 systemd[1]: Stopped ignition-disks.service. Oct 2 19:56:04.004174 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 19:56:04.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:04.004212 systemd[1]: Stopped ignition-kargs.service. 
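The Ignition files stage earlier in this log fetches crictl, kubeadm, and kubelet and only reports "[finished] writing file" after each download "matches expected sum of" a SHA-512 digest. As a rough illustration of that kind of check (a minimal sketch, not Ignition's actual implementation; the path and digest below are placeholders standing in for the values shown in the log entries above):

```python
import hashlib

def sha512_matches(path: str, expected_hex: str, chunk_size: int = 1 << 20) -> bool:
    """Stream a file and compare its SHA-512 digest against an expected hex string."""
    digest = hashlib.sha512()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_hex.lower()

# Hypothetical usage -- the real paths and digests appear in the log entries above:
# sha512_matches("/sysroot/opt/bin/kubelet", "71857ff4...2d348")
```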
Oct 2 19:56:04.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:04.005123 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 19:56:04.005157 systemd[1]: Stopped ignition-setup.service. Oct 2 19:56:04.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:04.006069 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:56:04.007226 systemd[1]: Stopping systemd-resolved.service... Oct 2 19:56:04.010227 systemd-networkd[740]: eth0: DHCPv6 lease lost Oct 2 19:56:04.010815 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 19:56:04.011644 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:56:04.011730 systemd[1]: Stopped systemd-networkd.service. Oct 2 19:56:04.013142 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 19:56:04.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:04.013170 systemd[1]: Closed systemd-networkd.socket. Oct 2 19:56:04.015382 systemd[1]: Stopping network-cleanup.service... Oct 2 19:56:04.032000 audit: BPF prog-id=9 op=UNLOAD Oct 2 19:56:04.017068 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 19:56:04.017131 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 19:56:04.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:04.018162 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 19:56:04.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:04.018201 systemd[1]: Stopped systemd-sysctl.service. Oct 2 19:56:04.036000 audit: BPF prog-id=6 op=UNLOAD Oct 2 19:56:04.020969 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 19:56:04.021024 systemd[1]: Stopped systemd-modules-load.service. Oct 2 19:56:04.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:04.022453 systemd[1]: Stopping systemd-udevd.service... Oct 2 19:56:04.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:04.028487 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 19:56:04.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:04.028987 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 19:56:04.029088 systemd[1]: Stopped systemd-resolved.service. 
Oct 2 19:56:04.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:04.033488 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 19:56:04.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:04.033606 systemd[1]: Stopped systemd-udevd.service. Oct 2 19:56:04.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:04.034715 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 19:56:04.034796 systemd[1]: Stopped network-cleanup.service. Oct 2 19:56:04.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:04.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:04.036105 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 19:56:04.036143 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 19:56:04.037195 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 19:56:04.037226 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 19:56:04.038843 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 19:56:04.038901 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 19:56:04.039964 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 19:56:04.039997 systemd[1]: Stopped dracut-cmdline.service. Oct 2 19:56:04.042004 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 19:56:04.042055 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 19:56:04.044627 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 19:56:04.045937 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 2 19:56:04.045994 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Oct 2 19:56:04.047100 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 19:56:04.047136 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 19:56:04.047697 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 19:56:04.047731 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 19:56:04.050406 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 2 19:56:04.050961 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 19:56:04.051045 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 19:56:04.091733 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 19:56:04.091825 systemd[1]: Stopped sysroot-boot.service. Oct 2 19:56:04.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:04.093094 systemd[1]: Reached target initrd-switch-root.target. 
Oct 2 19:56:04.093869 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 19:56:04.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:04.093921 systemd[1]: Stopped initrd-setup-root.service. Oct 2 19:56:04.095556 systemd[1]: Starting initrd-switch-root.service... Oct 2 19:56:04.102950 systemd[1]: Switching root. Oct 2 19:56:04.119240 systemd-journald[290]: Journal stopped Oct 2 19:56:06.339997 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). Oct 2 19:56:06.340130 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 19:56:06.340144 kernel: SELinux: Class anon_inode not defined in policy. Oct 2 19:56:06.340167 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 19:56:06.340178 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 19:56:06.340187 kernel: SELinux: policy capability open_perms=1 Oct 2 19:56:06.340200 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 19:56:06.340209 kernel: SELinux: policy capability always_check_network=0 Oct 2 19:56:06.340219 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 19:56:06.340228 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 19:56:06.340238 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 19:56:06.340247 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 19:56:06.340258 systemd[1]: Successfully loaded SELinux policy in 31.825ms. Oct 2 19:56:06.340272 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.987ms. Oct 2 19:56:06.340286 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:56:06.340298 systemd[1]: Detected virtualization kvm. Oct 2 19:56:06.340308 systemd[1]: Detected architecture arm64. Oct 2 19:56:06.340319 systemd[1]: Detected first boot. Oct 2 19:56:06.340338 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:56:06.340350 systemd[1]: Populated /etc with preset unit settings. Oct 2 19:56:06.340361 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:56:06.340372 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:56:06.340383 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:56:06.340395 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 19:56:06.340405 systemd[1]: Stopped initrd-switch-root.service. Oct 2 19:56:06.340417 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 19:56:06.340428 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 19:56:06.340438 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 19:56:06.340449 systemd[1]: Created slice system-getty.slice. Oct 2 19:56:06.340459 systemd[1]: Created slice system-modprobe.slice. 
Oct 2 19:56:06.340469 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 19:56:06.340480 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 19:56:06.340490 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 19:56:06.340500 systemd[1]: Created slice user.slice. Oct 2 19:56:06.340513 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:56:06.340524 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 19:56:06.340535 systemd[1]: Set up automount boot.automount. Oct 2 19:56:06.340545 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 19:56:06.340556 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 19:56:06.340566 systemd[1]: Stopped target initrd-fs.target. Oct 2 19:56:06.340576 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 19:56:06.340587 systemd[1]: Reached target integritysetup.target. Oct 2 19:56:06.340598 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:56:06.340609 systemd[1]: Reached target remote-fs.target. Oct 2 19:56:06.340619 systemd[1]: Reached target slices.target. Oct 2 19:56:06.340630 systemd[1]: Reached target swap.target. Oct 2 19:56:06.340640 systemd[1]: Reached target torcx.target. Oct 2 19:56:06.340661 systemd[1]: Reached target veritysetup.target. Oct 2 19:56:06.340671 systemd[1]: Listening on systemd-coredump.socket. Oct 2 19:56:06.340682 systemd[1]: Listening on systemd-initctl.socket. Oct 2 19:56:06.340692 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:56:06.340704 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:56:06.340715 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:56:06.340725 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 19:56:06.340736 systemd[1]: Mounting dev-hugepages.mount... Oct 2 19:56:06.340748 systemd[1]: Mounting dev-mqueue.mount... Oct 2 19:56:06.340759 systemd[1]: Mounting media.mount... Oct 2 19:56:06.340769 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 19:56:06.340779 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 19:56:06.340790 systemd[1]: Mounting tmp.mount... Oct 2 19:56:06.340800 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 19:56:06.340812 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 19:56:06.340822 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:56:06.340833 systemd[1]: Starting modprobe@configfs.service... Oct 2 19:56:06.340843 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 19:56:06.340853 systemd[1]: Starting modprobe@drm.service... Oct 2 19:56:06.340863 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 19:56:06.340879 systemd[1]: Starting modprobe@fuse.service... Oct 2 19:56:06.340892 systemd[1]: Starting modprobe@loop.service... Oct 2 19:56:06.340903 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 19:56:06.340915 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 19:56:06.340926 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 19:56:06.340937 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 19:56:06.340947 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 19:56:06.340957 systemd[1]: Stopped systemd-journald.service. Oct 2 19:56:06.340967 systemd[1]: Starting systemd-journald.service... Oct 2 19:56:06.340981 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:56:06.340992 systemd[1]: Starting systemd-network-generator.service... 
Oct 2 19:56:06.341002 kernel: fuse: init (API version 7.34) Oct 2 19:56:06.341020 systemd[1]: Starting systemd-remount-fs.service... Oct 2 19:56:06.341046 kernel: loop: module loaded Oct 2 19:56:06.341058 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:56:06.341069 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 19:56:06.341079 systemd[1]: Stopped verity-setup.service. Oct 2 19:56:06.341089 systemd[1]: Mounted dev-hugepages.mount. Oct 2 19:56:06.341099 systemd[1]: Mounted dev-mqueue.mount. Oct 2 19:56:06.341109 systemd[1]: Mounted media.mount. Oct 2 19:56:06.341119 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 19:56:06.341131 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 19:56:06.341141 systemd[1]: Mounted tmp.mount. Oct 2 19:56:06.341151 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:56:06.341162 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 19:56:06.341172 systemd[1]: Finished modprobe@configfs.service. Oct 2 19:56:06.341183 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 19:56:06.341193 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 19:56:06.341203 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 19:56:06.341214 systemd[1]: Finished modprobe@drm.service. Oct 2 19:56:06.341228 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 19:56:06.341241 systemd-journald[992]: Journal started Oct 2 19:56:06.341285 systemd-journald[992]: Runtime Journal (/run/log/journal/b1c3f4ad3bc64c45ac50969296877504) is 6.0M, max 48.7M, 42.6M free. Oct 2 19:56:04.179000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 19:56:04.437000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:56:04.437000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:56:04.437000 audit: BPF prog-id=10 op=LOAD Oct 2 19:56:04.438000 audit: BPF prog-id=10 op=UNLOAD Oct 2 19:56:04.438000 audit: BPF prog-id=11 op=LOAD Oct 2 19:56:04.438000 audit: BPF prog-id=11 op=UNLOAD Oct 2 19:56:06.228000 audit: BPF prog-id=12 op=LOAD Oct 2 19:56:06.228000 audit: BPF prog-id=3 op=UNLOAD Oct 2 19:56:06.228000 audit: BPF prog-id=13 op=LOAD Oct 2 19:56:06.228000 audit: BPF prog-id=14 op=LOAD Oct 2 19:56:06.228000 audit: BPF prog-id=4 op=UNLOAD Oct 2 19:56:06.228000 audit: BPF prog-id=5 op=UNLOAD Oct 2 19:56:06.229000 audit: BPF prog-id=15 op=LOAD Oct 2 19:56:06.229000 audit: BPF prog-id=12 op=UNLOAD Oct 2 19:56:06.229000 audit: BPF prog-id=16 op=LOAD Oct 2 19:56:06.229000 audit: BPF prog-id=17 op=LOAD Oct 2 19:56:06.229000 audit: BPF prog-id=13 op=UNLOAD Oct 2 19:56:06.229000 audit: BPF prog-id=14 op=UNLOAD Oct 2 19:56:06.230000 audit: BPF prog-id=18 op=LOAD Oct 2 19:56:06.230000 audit: BPF prog-id=15 op=UNLOAD Oct 2 19:56:06.230000 audit: BPF prog-id=19 op=LOAD Oct 2 19:56:06.230000 audit: BPF prog-id=20 op=LOAD Oct 2 19:56:06.230000 audit: BPF prog-id=16 op=UNLOAD Oct 2 19:56:06.230000 audit: BPF prog-id=17 op=UNLOAD Oct 2 19:56:06.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:56:06.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.236000 audit: BPF prog-id=18 op=UNLOAD Oct 2 19:56:06.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.306000 audit: BPF prog-id=21 op=LOAD Oct 2 19:56:06.306000 audit: BPF prog-id=22 op=LOAD Oct 2 19:56:06.306000 audit: BPF prog-id=23 op=LOAD Oct 2 19:56:06.306000 audit: BPF prog-id=19 op=UNLOAD Oct 2 19:56:06.306000 audit: BPF prog-id=20 op=UNLOAD Oct 2 19:56:06.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:56:06.339000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 19:56:06.339000 audit[992]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffeb1df8d0 a2=4000 a3=1 items=0 ppid=1 pid=992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:06.339000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 19:56:06.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.227143 systemd[1]: Queued start job for default target multi-user.target. Oct 2 19:56:04.497177 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T19:56:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:56:06.227156 systemd[1]: Unnecessary job was removed for dev-vda6.device. Oct 2 19:56:04.497693 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T19:56:04Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:56:06.230764 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 2 19:56:04.497711 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T19:56:04Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:56:04.497741 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T19:56:04Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 19:56:04.497750 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T19:56:04Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 19:56:04.497778 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T19:56:04Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 19:56:04.497789 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T19:56:04Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 19:56:04.498007 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T19:56:04Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 19:56:04.498058 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T19:56:04Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:56:06.343270 systemd[1]: Finished modprobe@efi_pstore.service. 
Oct 2 19:56:04.498070 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T19:56:04Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:56:06.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:04.498441 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T19:56:04Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 19:56:04.498474 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T19:56:04Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 19:56:04.498491 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T19:56:04Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 19:56:04.498504 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T19:56:04Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 19:56:04.498520 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T19:56:04Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 19:56:04.498534 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T19:56:04Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 19:56:05.939144 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T19:56:05Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:56:05.939393 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T19:56:05Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:56:05.939488 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T19:56:05Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:56:05.939638 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T19:56:05Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:56:05.939685 
/usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T19:56:05Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 19:56:05.939738 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T19:56:05Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 19:56:06.344718 systemd[1]: Started systemd-journald.service. Oct 2 19:56:06.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.345458 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 19:56:06.345620 systemd[1]: Finished modprobe@fuse.service. Oct 2 19:56:06.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.346549 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 19:56:06.346711 systemd[1]: Finished modprobe@loop.service. Oct 2 19:56:06.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.347792 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:56:06.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.349709 systemd[1]: Finished systemd-network-generator.service. Oct 2 19:56:06.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.350765 systemd[1]: Finished systemd-remount-fs.service. Oct 2 19:56:06.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.351836 systemd[1]: Reached target network-pre.target. Oct 2 19:56:06.353614 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 19:56:06.355260 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 19:56:06.355817 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 19:56:06.358128 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 19:56:06.360028 systemd[1]: Starting systemd-journal-flush.service... 
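The torcx generator output above ends with the sealed system state written to /run/metadata/torcx as KEY="value" pairs (TORCX_LOWER_PROFILES, TORCX_PROFILE_PATH, TORCX_BINDIR, TORCX_UNPACKDIR, and so on). A minimal sketch of reading that environment-style file back, assuming only the format shown in the log line:

```python
import shlex

def read_torcx_metadata(path: str = "/run/metadata/torcx") -> dict[str, str]:
    """Parse KEY="value" pairs such as TORCX_PROFILE_PATH="/run/torcx/profile.json"."""
    metadata: dict[str, str] = {}
    with open(path, encoding="utf-8") as fh:
        for token in shlex.split(fh.read()):
            if "=" in token:
                key, _, value = token.partition("=")
                metadata[key] = value
    return metadata

# e.g. read_torcx_metadata().get("TORCX_BINDIR") -> "/run/torcx/bin"
```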
Oct 2 19:56:06.360897 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 19:56:06.362136 systemd[1]: Starting systemd-random-seed.service... Oct 2 19:56:06.363025 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 19:56:06.364064 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:56:06.367120 systemd-journald[992]: Time spent on flushing to /var/log/journal/b1c3f4ad3bc64c45ac50969296877504 is 14.588ms for 990 entries. Oct 2 19:56:06.367120 systemd-journald[992]: System Journal (/var/log/journal/b1c3f4ad3bc64c45ac50969296877504) is 8.0M, max 195.6M, 187.6M free. Oct 2 19:56:06.399725 systemd-journald[992]: Received client request to flush runtime journal. Oct 2 19:56:06.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.365831 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 19:56:06.369261 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 19:56:06.373412 systemd[1]: Finished systemd-random-seed.service. Oct 2 19:56:06.400349 udevadm[1029]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Oct 2 19:56:06.374181 systemd[1]: Reached target first-boot-complete.target. Oct 2 19:56:06.388575 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:56:06.391260 systemd[1]: Starting systemd-udev-settle.service... Oct 2 19:56:06.392231 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 19:56:06.394056 systemd[1]: Starting systemd-sysusers.service... Oct 2 19:56:06.400822 systemd[1]: Finished systemd-journal-flush.service. Oct 2 19:56:06.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.404327 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:56:06.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.418744 systemd[1]: Finished systemd-sysusers.service. Oct 2 19:56:06.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.420586 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:56:06.452164 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Oct 2 19:56:06.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.753253 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 19:56:06.754000 audit: BPF prog-id=24 op=LOAD Oct 2 19:56:06.754000 audit: BPF prog-id=25 op=LOAD Oct 2 19:56:06.754000 audit: BPF prog-id=7 op=UNLOAD Oct 2 19:56:06.754000 audit: BPF prog-id=8 op=UNLOAD Oct 2 19:56:06.755319 systemd[1]: Starting systemd-udevd.service... Oct 2 19:56:06.776606 systemd-udevd[1035]: Using default interface naming scheme 'v252'. Oct 2 19:56:06.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.790000 audit: BPF prog-id=26 op=LOAD Oct 2 19:56:06.788699 systemd[1]: Started systemd-udevd.service. Oct 2 19:56:06.791078 systemd[1]: Starting systemd-networkd.service... Oct 2 19:56:06.798000 audit: BPF prog-id=27 op=LOAD Oct 2 19:56:06.798000 audit: BPF prog-id=28 op=LOAD Oct 2 19:56:06.798000 audit: BPF prog-id=29 op=LOAD Oct 2 19:56:06.799137 systemd[1]: Starting systemd-userdbd.service... Oct 2 19:56:06.825679 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Oct 2 19:56:06.846063 systemd[1]: Started systemd-userdbd.service. Oct 2 19:56:06.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.872887 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:56:06.910494 systemd[1]: Finished systemd-udev-settle.service. Oct 2 19:56:06.911141 systemd-networkd[1042]: lo: Link UP Oct 2 19:56:06.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.911428 systemd-networkd[1042]: lo: Gained carrier Oct 2 19:56:06.911845 systemd-networkd[1042]: Enumeration completed Oct 2 19:56:06.912341 systemd-networkd[1042]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:56:06.912354 systemd[1]: Starting lvm2-activation-early.service... Oct 2 19:56:06.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.913035 systemd[1]: Started systemd-networkd.service. Oct 2 19:56:06.915495 systemd-networkd[1042]: eth0: Link UP Oct 2 19:56:06.915591 systemd-networkd[1042]: eth0: Gained carrier Oct 2 19:56:06.927217 lvm[1068]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:56:06.945142 systemd-networkd[1042]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 2 19:56:06.961930 systemd[1]: Finished lvm2-activation-early.service. 
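systemd-networkd above brings eth0 up with a DHCPv4 lease of 10.0.0.13/16 and gateway 10.0.0.1. For reference, the lease reported in that entry can be unpacked with Python's ipaddress module (a standalone sketch, not part of the boot flow):

```python
import ipaddress

# The lease reported by systemd-networkd in the log entry above.
lease = ipaddress.ip_interface("10.0.0.13/16")

print(lease.ip)               # 10.0.0.13
print(lease.network)          # 10.0.0.0/16
print(lease.network.netmask)  # 255.255.0.0
print(ipaddress.ip_address("10.0.0.1") in lease.network)  # True: the gateway is on-link
```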
Oct 2 19:56:06.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.962738 systemd[1]: Reached target cryptsetup.target. Oct 2 19:56:06.964469 systemd[1]: Starting lvm2-activation.service... Oct 2 19:56:06.968728 lvm[1069]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:56:06.999995 systemd[1]: Finished lvm2-activation.service. Oct 2 19:56:07.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:07.000736 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:56:07.001361 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 19:56:07.001390 systemd[1]: Reached target local-fs.target. Oct 2 19:56:07.001919 systemd[1]: Reached target machines.target. Oct 2 19:56:07.003691 systemd[1]: Starting ldconfig.service... Oct 2 19:56:07.004604 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 19:56:07.004665 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:56:07.005843 systemd[1]: Starting systemd-boot-update.service... Oct 2 19:56:07.007495 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 19:56:07.009483 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 19:56:07.011073 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:56:07.011146 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:56:07.012561 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 2 19:56:07.015499 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1071 (bootctl) Oct 2 19:56:07.016908 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 19:56:07.022852 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 19:56:07.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:07.029085 systemd-tmpfiles[1074]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 19:56:07.030556 systemd-tmpfiles[1074]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 19:56:07.031671 systemd-tmpfiles[1074]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 19:56:07.101005 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 19:56:07.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:56:07.114367 systemd-fsck[1079]: fsck.fat 4.2 (2021-01-31) Oct 2 19:56:07.114367 systemd-fsck[1079]: /dev/vda1: 236 files, 113463/258078 clusters Oct 2 19:56:07.116249 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 19:56:07.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:07.211658 ldconfig[1070]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 19:56:07.215029 systemd[1]: Finished ldconfig.service. Oct 2 19:56:07.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:07.323521 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 19:56:07.324918 systemd[1]: Mounting boot.mount... Oct 2 19:56:07.332677 systemd[1]: Mounted boot.mount. Oct 2 19:56:07.342359 systemd[1]: Finished systemd-boot-update.service. Oct 2 19:56:07.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:07.395857 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 19:56:07.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:07.398065 systemd[1]: Starting audit-rules.service... Oct 2 19:56:07.399891 systemd[1]: Starting clean-ca-certificates.service... Oct 2 19:56:07.401605 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 19:56:07.403000 audit: BPF prog-id=30 op=LOAD Oct 2 19:56:07.404497 systemd[1]: Starting systemd-resolved.service... Oct 2 19:56:07.405000 audit: BPF prog-id=31 op=LOAD Oct 2 19:56:07.406908 systemd[1]: Starting systemd-timesyncd.service... Oct 2 19:56:07.410335 systemd[1]: Starting systemd-update-utmp.service... Oct 2 19:56:07.411545 systemd[1]: Finished clean-ca-certificates.service. Oct 2 19:56:07.412965 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 19:56:07.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:07.418000 audit[1093]: SYSTEM_BOOT pid=1093 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:56:07.425344 systemd[1]: Finished systemd-update-utmp.service. Oct 2 19:56:07.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:07.427244 systemd[1]: Finished systemd-journal-catalog-update.service. 
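The ldconfig entry above rejects /lib/ld.so.conf because it "is not an ELF file - it has the wrong magic bytes at the start". The test being alluded to is simply the four-byte ELF magic 0x7f 'E' 'L' 'F'; a small illustrative sketch of the same check (not ldconfig's code):

```python
ELF_MAGIC = b"\x7fELF"

def looks_like_elf(path: str) -> bool:
    """Return True if the file begins with the ELF magic bytes 0x7f 'E' 'L' 'F'."""
    with open(path, "rb") as fh:
        return fh.read(4) == ELF_MAGIC

# /lib/ld.so.conf is a plain-text configuration file, so this is expected to be False:
# looks_like_elf("/lib/ld.so.conf")
```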
Oct 2 19:56:07.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:07.429252 systemd[1]: Starting systemd-update-done.service... Oct 2 19:56:07.438397 systemd[1]: Finished systemd-update-done.service. Oct 2 19:56:07.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:07.447800 augenrules[1103]: No rules Oct 2 19:56:07.447000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 19:56:07.447000 audit[1103]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd1420830 a2=420 a3=0 items=0 ppid=1082 pid=1103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.447000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 19:56:07.448541 systemd[1]: Finished audit-rules.service. Oct 2 19:56:07.456768 systemd-resolved[1086]: Positive Trust Anchors: Oct 2 19:56:07.456779 systemd-resolved[1086]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:56:07.456806 systemd-resolved[1086]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:56:07.460825 systemd[1]: Started systemd-timesyncd.service. Oct 2 19:56:07.461896 systemd[1]: Reached target time-set.target. Oct 2 19:56:07.462052 systemd-timesyncd[1089]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 2 19:56:07.462305 systemd-timesyncd[1089]: Initial clock synchronization to Mon 2023-10-02 19:56:07.123815 UTC. Oct 2 19:56:07.466908 systemd-resolved[1086]: Defaulting to hostname 'linux'. Oct 2 19:56:07.468304 systemd[1]: Started systemd-resolved.service. Oct 2 19:56:07.469034 systemd[1]: Reached target network.target. Oct 2 19:56:07.469583 systemd[1]: Reached target nss-lookup.target. Oct 2 19:56:07.470147 systemd[1]: Reached target sysinit.target. Oct 2 19:56:07.470913 systemd[1]: Started motdgen.path. Oct 2 19:56:07.471478 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 19:56:07.472404 systemd[1]: Started logrotate.timer. Oct 2 19:56:07.473056 systemd[1]: Started mdadm.timer. Oct 2 19:56:07.473610 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 19:56:07.474763 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 19:56:07.474831 systemd[1]: Reached target paths.target. Oct 2 19:56:07.475847 systemd[1]: Reached target timers.target. Oct 2 19:56:07.476784 systemd[1]: Listening on dbus.socket. Oct 2 19:56:07.478357 systemd[1]: Starting docker.socket... Oct 2 19:56:07.481724 systemd[1]: Listening on sshd.socket. 
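The audit PROCTITLE record emitted while audit-rules loaded its ruleset stores the command line hex-encoded, with NUL bytes separating the arguments (proctitle=2F7362696E2F617564697463746C002D52...). A small decoding sketch, assuming nothing beyond the Python standard library:

# Decode an audit PROCTITLE value: a hex string whose arguments are separated by NUL bytes.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return " ".join(p.decode("utf-8", "replace") for p in raw.split(b"\x00") if p)

# The value logged for auditctl above decodes to "/sbin/auditctl -R /etc/audit/audit.rules".
print(decode_proctitle(
    "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"))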
Oct 2 19:56:07.482391 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:56:07.482934 systemd[1]: Listening on docker.socket. Oct 2 19:56:07.483741 systemd[1]: Reached target sockets.target. Oct 2 19:56:07.484481 systemd[1]: Reached target basic.target. Oct 2 19:56:07.485159 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:56:07.485190 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:56:07.486235 systemd[1]: Starting containerd.service... Oct 2 19:56:07.487763 systemd[1]: Starting dbus.service... Oct 2 19:56:07.489314 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 19:56:07.491553 systemd[1]: Starting extend-filesystems.service... Oct 2 19:56:07.492384 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 19:56:07.493496 systemd[1]: Starting motdgen.service... Oct 2 19:56:07.499518 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 19:56:07.501755 jq[1113]: false Oct 2 19:56:07.504185 systemd[1]: Starting prepare-critools.service... Oct 2 19:56:07.506779 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 19:56:07.508602 systemd[1]: Starting sshd-keygen.service... Oct 2 19:56:07.511150 systemd[1]: Starting systemd-logind.service... Oct 2 19:56:07.511707 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:56:07.511785 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 2 19:56:07.512279 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 19:56:07.513025 systemd[1]: Starting update-engine.service... Oct 2 19:56:07.514548 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 19:56:07.517218 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 19:56:07.517384 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 19:56:07.517779 jq[1128]: true Oct 2 19:56:07.519009 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 19:56:07.519208 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 2 19:56:07.526582 tar[1132]: crictl Oct 2 19:56:07.526802 tar[1131]: ./ Oct 2 19:56:07.526802 tar[1131]: ./loopback Oct 2 19:56:07.527964 extend-filesystems[1114]: Found vda Oct 2 19:56:07.527964 extend-filesystems[1114]: Found vda1 Oct 2 19:56:07.527964 extend-filesystems[1114]: Found vda2 Oct 2 19:56:07.527964 extend-filesystems[1114]: Found vda3 Oct 2 19:56:07.527964 extend-filesystems[1114]: Found usr Oct 2 19:56:07.527964 extend-filesystems[1114]: Found vda4 Oct 2 19:56:07.527964 extend-filesystems[1114]: Found vda6 Oct 2 19:56:07.527964 extend-filesystems[1114]: Found vda7 Oct 2 19:56:07.527964 extend-filesystems[1114]: Found vda9 Oct 2 19:56:07.527964 extend-filesystems[1114]: Checking size of /dev/vda9 Oct 2 19:56:07.548564 jq[1133]: true Oct 2 19:56:07.548229 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 19:56:07.548390 systemd[1]: Finished motdgen.service. 
Oct 2 19:56:07.561573 dbus-daemon[1112]: [system] SELinux support is enabled Oct 2 19:56:07.561742 systemd[1]: Started dbus.service. Oct 2 19:56:07.563923 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 19:56:07.563951 systemd[1]: Reached target system-config.target. Oct 2 19:56:07.564978 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 19:56:07.566328 extend-filesystems[1114]: Old size kept for /dev/vda9 Oct 2 19:56:07.565004 systemd[1]: Reached target user-config.target. Oct 2 19:56:07.566941 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 19:56:07.569488 systemd[1]: Finished extend-filesystems.service. Oct 2 19:56:07.602358 tar[1131]: ./bandwidth Oct 2 19:56:07.604372 systemd-logind[1125]: Watching system buttons on /dev/input/event0 (Power Button) Oct 2 19:56:07.604556 systemd-logind[1125]: New seat seat0. Oct 2 19:56:07.612051 systemd[1]: Started systemd-logind.service. Oct 2 19:56:07.634196 bash[1165]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:56:07.635202 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 19:56:07.650446 tar[1131]: ./ptp Oct 2 19:56:07.651958 env[1137]: time="2023-10-02T19:56:07.651902800Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 19:56:07.685937 env[1137]: time="2023-10-02T19:56:07.685881680Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 19:56:07.686086 update_engine[1127]: I1002 19:56:07.680004 1127 main.cc:92] Flatcar Update Engine starting Oct 2 19:56:07.686350 env[1137]: time="2023-10-02T19:56:07.686283560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:56:07.689186 env[1137]: time="2023-10-02T19:56:07.689144560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:56:07.689186 env[1137]: time="2023-10-02T19:56:07.689180840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:56:07.689435 env[1137]: time="2023-10-02T19:56:07.689407520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:56:07.689435 env[1137]: time="2023-10-02T19:56:07.689430880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 2 19:56:07.689493 env[1137]: time="2023-10-02T19:56:07.689445960Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 19:56:07.689493 env[1137]: time="2023-10-02T19:56:07.689458240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 19:56:07.689562 env[1137]: time="2023-10-02T19:56:07.689543760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Oct 2 19:56:07.689933 env[1137]: time="2023-10-02T19:56:07.689909280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:56:07.690088 env[1137]: time="2023-10-02T19:56:07.690065480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:56:07.690125 env[1137]: time="2023-10-02T19:56:07.690087280Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 19:56:07.690157 env[1137]: time="2023-10-02T19:56:07.690148080Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 19:56:07.690185 env[1137]: time="2023-10-02T19:56:07.690161160Z" level=info msg="metadata content store policy set" policy=shared Oct 2 19:56:07.694306 systemd[1]: Started update-engine.service. Oct 2 19:56:07.696792 systemd[1]: Started locksmithd.service. Oct 2 19:56:07.697998 update_engine[1127]: I1002 19:56:07.697963 1127 update_check_scheduler.cc:74] Next update check in 6m55s Oct 2 19:56:07.702138 env[1137]: time="2023-10-02T19:56:07.702095080Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 19:56:07.702138 env[1137]: time="2023-10-02T19:56:07.702141160Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 19:56:07.702229 env[1137]: time="2023-10-02T19:56:07.702154920Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 19:56:07.702229 env[1137]: time="2023-10-02T19:56:07.702189000Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 19:56:07.702229 env[1137]: time="2023-10-02T19:56:07.702210920Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 19:56:07.702229 env[1137]: time="2023-10-02T19:56:07.702225480Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 19:56:07.702328 env[1137]: time="2023-10-02T19:56:07.702238840Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 19:56:07.702729 env[1137]: time="2023-10-02T19:56:07.702699960Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 19:56:07.702770 env[1137]: time="2023-10-02T19:56:07.702731800Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 19:56:07.702770 env[1137]: time="2023-10-02T19:56:07.702746280Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 19:56:07.702770 env[1137]: time="2023-10-02T19:56:07.702758760Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 2 19:56:07.702834 env[1137]: time="2023-10-02T19:56:07.702771520Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 19:56:07.702942 env[1137]: time="2023-10-02T19:56:07.702915720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Oct 2 19:56:07.703035 env[1137]: time="2023-10-02T19:56:07.703000560Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 19:56:07.703278 env[1137]: time="2023-10-02T19:56:07.703256040Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 19:56:07.703317 env[1137]: time="2023-10-02T19:56:07.703286960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 19:56:07.703317 env[1137]: time="2023-10-02T19:56:07.703306200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 19:56:07.703434 env[1137]: time="2023-10-02T19:56:07.703417720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 19:56:07.703474 env[1137]: time="2023-10-02T19:56:07.703433480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 19:56:07.703474 env[1137]: time="2023-10-02T19:56:07.703448040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 19:56:07.703474 env[1137]: time="2023-10-02T19:56:07.703459360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 19:56:07.703548 env[1137]: time="2023-10-02T19:56:07.703475080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 19:56:07.703548 env[1137]: time="2023-10-02T19:56:07.703491920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 19:56:07.703548 env[1137]: time="2023-10-02T19:56:07.703504680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 19:56:07.703548 env[1137]: time="2023-10-02T19:56:07.703515440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 19:56:07.703548 env[1137]: time="2023-10-02T19:56:07.703528000Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 19:56:07.703657 env[1137]: time="2023-10-02T19:56:07.703644800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 19:56:07.703683 env[1137]: time="2023-10-02T19:56:07.703664680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 19:56:07.703683 env[1137]: time="2023-10-02T19:56:07.703677440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 19:56:07.703723 env[1137]: time="2023-10-02T19:56:07.703688840Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 19:56:07.703723 env[1137]: time="2023-10-02T19:56:07.703704040Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 19:56:07.703723 env[1137]: time="2023-10-02T19:56:07.703716480Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Oct 2 19:56:07.703785 env[1137]: time="2023-10-02T19:56:07.703733120Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 19:56:07.703785 env[1137]: time="2023-10-02T19:56:07.703765720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 2 19:56:07.704032 env[1137]: time="2023-10-02T19:56:07.703975040Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 19:56:07.706501 env[1137]: time="2023-10-02T19:56:07.704045480Z" level=info msg="Connect containerd service" Oct 2 19:56:07.706501 env[1137]: time="2023-10-02T19:56:07.704115200Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 19:56:07.706501 env[1137]: time="2023-10-02T19:56:07.704688480Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 19:56:07.706501 env[1137]: time="2023-10-02T19:56:07.705066320Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 2 19:56:07.706501 env[1137]: time="2023-10-02T19:56:07.705105840Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Oct 2 19:56:07.706501 env[1137]: time="2023-10-02T19:56:07.705149000Z" level=info msg="containerd successfully booted in 0.054406s" Oct 2 19:56:07.705249 systemd[1]: Started containerd.service. Oct 2 19:56:07.710965 tar[1131]: ./vlan Oct 2 19:56:07.712206 env[1137]: time="2023-10-02T19:56:07.712157520Z" level=info msg="Start subscribing containerd event" Oct 2 19:56:07.712265 env[1137]: time="2023-10-02T19:56:07.712217760Z" level=info msg="Start recovering state" Oct 2 19:56:07.712496 env[1137]: time="2023-10-02T19:56:07.712471320Z" level=info msg="Start event monitor" Oct 2 19:56:07.712604 env[1137]: time="2023-10-02T19:56:07.712572080Z" level=info msg="Start snapshots syncer" Oct 2 19:56:07.712634 env[1137]: time="2023-10-02T19:56:07.712603920Z" level=info msg="Start cni network conf syncer for default" Oct 2 19:56:07.712634 env[1137]: time="2023-10-02T19:56:07.712612480Z" level=info msg="Start streaming server" Oct 2 19:56:07.743316 tar[1131]: ./host-device Oct 2 19:56:07.777330 tar[1131]: ./tuning Oct 2 19:56:07.803908 locksmithd[1171]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 19:56:07.808204 tar[1131]: ./vrf Oct 2 19:56:07.833929 tar[1131]: ./sbr Oct 2 19:56:07.859258 tar[1131]: ./tap Oct 2 19:56:07.888491 tar[1131]: ./dhcp Oct 2 19:56:07.959863 tar[1131]: ./static Oct 2 19:56:07.980804 tar[1131]: ./firewall Oct 2 19:56:08.013719 tar[1131]: ./macvlan Oct 2 19:56:08.015418 systemd[1]: Finished prepare-critools.service. Oct 2 19:56:08.042915 tar[1131]: ./dummy Oct 2 19:56:08.070528 tar[1131]: ./bridge Oct 2 19:56:08.100674 tar[1131]: ./ipvlan Oct 2 19:56:08.128153 tar[1131]: ./portmap Oct 2 19:56:08.154286 tar[1131]: ./host-local Oct 2 19:56:08.188192 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 19:56:08.832622 sshd_keygen[1134]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 19:56:08.850935 systemd[1]: Finished sshd-keygen.service. Oct 2 19:56:08.852890 systemd[1]: Starting issuegen.service... Oct 2 19:56:08.857817 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 19:56:08.857955 systemd[1]: Finished issuegen.service. Oct 2 19:56:08.859857 systemd[1]: Starting systemd-user-sessions.service... Oct 2 19:56:08.867937 systemd[1]: Finished systemd-user-sessions.service. Oct 2 19:56:08.870333 systemd[1]: Started getty@tty1.service. Oct 2 19:56:08.871952 systemd[1]: Started serial-getty@ttyAMA0.service. Oct 2 19:56:08.872749 systemd[1]: Reached target getty.target. Oct 2 19:56:08.873361 systemd[1]: Reached target multi-user.target. Oct 2 19:56:08.874922 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 19:56:08.882054 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 19:56:08.882200 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 19:56:08.882933 systemd[1]: Startup finished in 584ms (kernel) + 5.564s (initrd) + 4.742s (userspace) = 10.890s. Oct 2 19:56:08.948844 systemd[1]: Created slice system-sshd.slice. Oct 2 19:56:08.950175 systemd[1]: Started sshd@0-10.0.0.13:22-10.0.0.1:44040.service. Oct 2 19:56:08.968271 systemd-networkd[1042]: eth0: Gained IPv6LL Oct 2 19:56:09.004211 sshd[1194]: Accepted publickey for core from 10.0.0.1 port 44040 ssh2: RSA SHA256:HYZQRhxVAAt6Gcr+zBcKZEn/OixtikngqD7jOIOR0c8 Oct 2 19:56:09.006133 sshd[1194]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:56:09.017082 systemd[1]: Created slice user-500.slice. 
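systemd's "Startup finished" summary above splits the boot into kernel, initrd and userspace phases, and the three figures are meant to add up to the printed total. A quick check of the numbers from this log (Python; the values are hand-copied from that entry):

# Phases from "Startup finished in 584ms (kernel) + 5.564s (initrd) + 4.742s (userspace) = 10.890s".
kernel, initrd, userspace = 0.584, 5.564, 4.742
total = kernel + initrd + userspace
print(f"{total:.3f}s")  # 10.890s, matching the total reported by systemd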
Oct 2 19:56:09.018178 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 19:56:09.019787 systemd-logind[1125]: New session 1 of user core. Oct 2 19:56:09.026063 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 19:56:09.027261 systemd[1]: Starting user@500.service... Oct 2 19:56:09.032194 (systemd)[1197]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:56:09.096813 systemd[1197]: Queued start job for default target default.target. Oct 2 19:56:09.097307 systemd[1197]: Reached target paths.target. Oct 2 19:56:09.097326 systemd[1197]: Reached target sockets.target. Oct 2 19:56:09.097336 systemd[1197]: Reached target timers.target. Oct 2 19:56:09.097346 systemd[1197]: Reached target basic.target. Oct 2 19:56:09.097393 systemd[1197]: Reached target default.target. Oct 2 19:56:09.097414 systemd[1197]: Startup finished in 59ms. Oct 2 19:56:09.097906 systemd[1]: Started user@500.service. Oct 2 19:56:09.098801 systemd[1]: Started session-1.scope. Oct 2 19:56:09.151135 systemd[1]: Started sshd@1-10.0.0.13:22-10.0.0.1:44054.service. Oct 2 19:56:09.188295 sshd[1206]: Accepted publickey for core from 10.0.0.1 port 44054 ssh2: RSA SHA256:HYZQRhxVAAt6Gcr+zBcKZEn/OixtikngqD7jOIOR0c8 Oct 2 19:56:09.189882 sshd[1206]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:56:09.194761 systemd-logind[1125]: New session 2 of user core. Oct 2 19:56:09.194913 systemd[1]: Started session-2.scope. Oct 2 19:56:09.257661 sshd[1206]: pam_unix(sshd:session): session closed for user core Oct 2 19:56:09.262376 systemd-logind[1125]: Session 2 logged out. Waiting for processes to exit. Oct 2 19:56:09.262566 systemd[1]: sshd@1-10.0.0.13:22-10.0.0.1:44054.service: Deactivated successfully. Oct 2 19:56:09.263152 systemd[1]: session-2.scope: Deactivated successfully. Oct 2 19:56:09.264804 systemd[1]: Started sshd@2-10.0.0.13:22-10.0.0.1:44066.service. Oct 2 19:56:09.265713 systemd-logind[1125]: Removed session 2. Oct 2 19:56:09.298689 sshd[1212]: Accepted publickey for core from 10.0.0.1 port 44066 ssh2: RSA SHA256:HYZQRhxVAAt6Gcr+zBcKZEn/OixtikngqD7jOIOR0c8 Oct 2 19:56:09.300028 sshd[1212]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:56:09.303171 systemd-logind[1125]: New session 3 of user core. Oct 2 19:56:09.303904 systemd[1]: Started session-3.scope. Oct 2 19:56:09.354198 sshd[1212]: pam_unix(sshd:session): session closed for user core Oct 2 19:56:09.357359 systemd[1]: sshd@2-10.0.0.13:22-10.0.0.1:44066.service: Deactivated successfully. Oct 2 19:56:09.357887 systemd[1]: session-3.scope: Deactivated successfully. Oct 2 19:56:09.358341 systemd-logind[1125]: Session 3 logged out. Waiting for processes to exit. Oct 2 19:56:09.359295 systemd[1]: Started sshd@3-10.0.0.13:22-10.0.0.1:44070.service. Oct 2 19:56:09.359760 systemd-logind[1125]: Removed session 3. Oct 2 19:56:09.393323 sshd[1218]: Accepted publickey for core from 10.0.0.1 port 44070 ssh2: RSA SHA256:HYZQRhxVAAt6Gcr+zBcKZEn/OixtikngqD7jOIOR0c8 Oct 2 19:56:09.394818 sshd[1218]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:56:09.397800 systemd-logind[1125]: New session 4 of user core. Oct 2 19:56:09.398569 systemd[1]: Started session-4.scope. Oct 2 19:56:09.452968 sshd[1218]: pam_unix(sshd:session): session closed for user core Oct 2 19:56:09.455513 systemd[1]: sshd@3-10.0.0.13:22-10.0.0.1:44070.service: Deactivated successfully. Oct 2 19:56:09.456101 systemd[1]: session-4.scope: Deactivated successfully. 
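This stretch of the log is a rapid series of SSH logins for the core user: session scopes are started and most are torn down again within a second. One way to summarise the churn from a captured copy of this journal (Python; "boot.log" is a hypothetical file holding the text of this excerpt, not something the host writes itself):

import re
from collections import Counter

events = Counter()
with open("boot.log") as f:  # hypothetical capture of this journal excerpt
    for line in f:
        m = re.search(r"pam_unix\(sshd:session\): session (opened|closed) for user", line)
        if m:
            events[m.group(1)] += 1
print(events)  # for this excerpt: 7 opened, 5 closed (sessions 1 and 7 are still active at the end)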
Oct 2 19:56:09.456827 systemd-logind[1125]: Session 4 logged out. Waiting for processes to exit. Oct 2 19:56:09.457782 systemd[1]: Started sshd@4-10.0.0.13:22-10.0.0.1:44076.service. Oct 2 19:56:09.458504 systemd-logind[1125]: Removed session 4. Oct 2 19:56:09.492119 sshd[1224]: Accepted publickey for core from 10.0.0.1 port 44076 ssh2: RSA SHA256:HYZQRhxVAAt6Gcr+zBcKZEn/OixtikngqD7jOIOR0c8 Oct 2 19:56:09.493388 sshd[1224]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:56:09.497221 systemd-logind[1125]: New session 5 of user core. Oct 2 19:56:09.497434 systemd[1]: Started session-5.scope. Oct 2 19:56:09.555082 sudo[1227]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 19:56:09.555272 sudo[1227]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:56:09.567218 dbus-daemon[1112]: avc: received setenforce notice (enforcing=1) Oct 2 19:56:09.568080 sudo[1227]: pam_unix(sudo:session): session closed for user root Oct 2 19:56:09.570245 sshd[1224]: pam_unix(sshd:session): session closed for user core Oct 2 19:56:09.573684 systemd[1]: sshd@4-10.0.0.13:22-10.0.0.1:44076.service: Deactivated successfully. Oct 2 19:56:09.574289 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 19:56:09.574807 systemd-logind[1125]: Session 5 logged out. Waiting for processes to exit. Oct 2 19:56:09.575800 systemd[1]: Started sshd@5-10.0.0.13:22-10.0.0.1:44080.service. Oct 2 19:56:09.576518 systemd-logind[1125]: Removed session 5. Oct 2 19:56:09.610806 sshd[1231]: Accepted publickey for core from 10.0.0.1 port 44080 ssh2: RSA SHA256:HYZQRhxVAAt6Gcr+zBcKZEn/OixtikngqD7jOIOR0c8 Oct 2 19:56:09.612491 sshd[1231]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:56:09.616967 systemd-logind[1125]: New session 6 of user core. Oct 2 19:56:09.617730 systemd[1]: Started session-6.scope. Oct 2 19:56:09.671924 sudo[1235]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 19:56:09.672141 sudo[1235]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:56:09.674763 sudo[1235]: pam_unix(sudo:session): session closed for user root Oct 2 19:56:09.681525 sudo[1234]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 19:56:09.681723 sudo[1234]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:56:09.690649 systemd[1]: Stopping audit-rules.service... Oct 2 19:56:09.690000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:56:09.692493 kernel: kauditd_printk_skb: 129 callbacks suppressed Oct 2 19:56:09.692532 kernel: audit: type=1305 audit(1696276569.690:169): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:56:09.692784 auditctl[1238]: No rules Oct 2 19:56:09.692970 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 19:56:09.693126 systemd[1]: Stopped audit-rules.service. 
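With the kauditd rate limit just lifted ("129 callbacks suppressed"), each audit event in the lines around this point shows up twice: once from userspace with a symbolic name (CONFIG_CHANGE, SYSCALL, PROCTITLE, SERVICE_STOP, ...) and once echoed by the kernel with a numeric type= field carrying the same serial number (e.g. audit(1696276569.690:169)). Pairing records by serial gives the small lookup below (Python; the table is read off this log rather than taken from the audit headers):

# type= values observed in this boot, paired with the userspace record names
# that carry the same audit serial number.
AUDIT_TYPES = {
    1300: "SYSCALL",
    1305: "CONFIG_CHANGE",
    1327: "PROCTITLE",
    1130: "SERVICE_START",
    1131: "SERVICE_STOP",
    1106: "USER_END",
    1104: "CRED_DISP",
}

print(AUDIT_TYPES[1305])  # CONFIG_CHANGE, the auditctl rule removal echoed as type=1305 above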
Oct 2 19:56:09.690000 audit[1238]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdeb86560 a2=420 a3=0 items=0 ppid=1 pid=1238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.694417 systemd[1]: Starting audit-rules.service... Oct 2 19:56:09.696601 kernel: audit: type=1300 audit(1696276569.690:169): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdeb86560 a2=420 a3=0 items=0 ppid=1 pid=1238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.696686 kernel: audit: type=1327 audit(1696276569.690:169): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:56:09.690000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:56:09.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:09.698690 kernel: audit: type=1131 audit(1696276569.692:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:09.714632 augenrules[1255]: No rules Oct 2 19:56:09.715304 systemd[1]: Finished audit-rules.service. Oct 2 19:56:09.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:09.716510 sudo[1234]: pam_unix(sudo:session): session closed for user root Oct 2 19:56:09.715000 audit[1234]: USER_END pid=1234 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:09.720206 kernel: audit: type=1130 audit(1696276569.714:171): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:09.720747 kernel: audit: type=1106 audit(1696276569.715:172): pid=1234 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:09.720771 kernel: audit: type=1104 audit(1696276569.715:173): pid=1234 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:09.715000 audit[1234]: CRED_DISP pid=1234 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:56:09.720990 sshd[1231]: pam_unix(sshd:session): session closed for user core Oct 2 19:56:09.722000 audit[1231]: USER_END pid=1231 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:56:09.723764 systemd[1]: Started sshd@6-10.0.0.13:22-10.0.0.1:44094.service. Oct 2 19:56:09.722000 audit[1231]: CRED_DISP pid=1231 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:56:09.724620 systemd[1]: sshd@5-10.0.0.13:22-10.0.0.1:44080.service: Deactivated successfully. Oct 2 19:56:09.725186 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 19:56:09.726553 systemd-logind[1125]: Session 6 logged out. Waiting for processes to exit. Oct 2 19:56:09.727456 systemd-logind[1125]: Removed session 6. Oct 2 19:56:09.727806 kernel: audit: type=1106 audit(1696276569.722:174): pid=1231 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:56:09.727838 kernel: audit: type=1104 audit(1696276569.722:175): pid=1231 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:56:09.727854 kernel: audit: type=1130 audit(1696276569.722:176): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.13:22-10.0.0.1:44094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:09.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.13:22-10.0.0.1:44094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:09.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.13:22-10.0.0.1:44080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:56:09.758000 audit[1260]: USER_ACCT pid=1260 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:56:09.759199 sshd[1260]: Accepted publickey for core from 10.0.0.1 port 44094 ssh2: RSA SHA256:HYZQRhxVAAt6Gcr+zBcKZEn/OixtikngqD7jOIOR0c8 Oct 2 19:56:09.759000 audit[1260]: CRED_ACQ pid=1260 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:56:09.759000 audit[1260]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd3ceac70 a2=3 a3=1 items=0 ppid=1 pid=1260 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.759000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 19:56:09.760465 sshd[1260]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:56:09.766338 systemd-logind[1125]: New session 7 of user core. Oct 2 19:56:09.766984 systemd[1]: Started session-7.scope. Oct 2 19:56:09.771000 audit[1260]: USER_START pid=1260 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:56:09.773000 audit[1263]: CRED_ACQ pid=1263 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:56:09.819000 audit[1264]: USER_ACCT pid=1264 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:09.819000 audit[1264]: CRED_REFR pid=1264 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:09.820739 sudo[1264]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 19:56:09.820935 sudo[1264]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:56:09.821000 audit[1264]: USER_START pid=1264 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:10.349423 systemd[1]: Reloading. 
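The daemon reload that follows triggers a long run of AVC denials against pid 1 in which the capability is given only by number: capability=39 for { bpf } and capability=38 for { perfmon }, evidently tied to the BPF prog-id LOAD/UNLOAD operations interleaved with them. Those numbers correspond to CAP_BPF and CAP_PERFMON from linux/capability.h; a tiny lookup sketch (Python; the two-entry table is hand-copied for illustration, not generated from kernel headers):

import re

# The only two capability numbers that appear in the denials below.
CAPS = {38: "CAP_PERFMON", 39: "CAP_BPF"}

line = 'audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 ...'
cap = int(re.search(r"capability=(\d+)", line).group(1))
print(CAPS.get(cap, f"capability {cap}"))  # CAP_BPF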
Oct 2 19:56:10.407080 /usr/lib/systemd/system-generators/torcx-generator[1294]: time="2023-10-02T19:56:10Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:56:10.407107 /usr/lib/systemd/system-generators/torcx-generator[1294]: time="2023-10-02T19:56:10Z" level=info msg="torcx already run" Oct 2 19:56:10.465751 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:56:10.465790 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:56:10.483096 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:56:10.526000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.526000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.526000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.526000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.526000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.526000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.526000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.526000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.526000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.526000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.526000 audit: BPF prog-id=37 op=LOAD Oct 2 19:56:10.526000 audit: BPF prog-id=27 op=UNLOAD Oct 2 19:56:10.526000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.526000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.526000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.526000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.526000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.526000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.526000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.526000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.526000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.526000 audit: BPF prog-id=38 op=LOAD Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit: BPF prog-id=39 
op=LOAD Oct 2 19:56:10.527000 audit: BPF prog-id=28 op=UNLOAD Oct 2 19:56:10.527000 audit: BPF prog-id=29 op=UNLOAD Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit: BPF prog-id=40 op=LOAD Oct 2 19:56:10.527000 audit: BPF prog-id=21 op=UNLOAD Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:56:10.527000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit: BPF prog-id=41 op=LOAD Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.527000 audit: BPF prog-id=42 op=LOAD Oct 2 19:56:10.527000 audit: BPF prog-id=22 op=UNLOAD Oct 2 19:56:10.527000 audit: BPF prog-id=23 op=UNLOAD Oct 2 19:56:10.528000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.528000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.528000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.528000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.528000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.528000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.528000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.528000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.528000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.528000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.528000 audit: BPF prog-id=43 op=LOAD Oct 2 19:56:10.528000 audit: BPF prog-id=30 op=UNLOAD Oct 2 19:56:10.529000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.529000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.529000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.529000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.529000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.529000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.529000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.529000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.529000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.530000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.530000 audit: BPF prog-id=44 op=LOAD Oct 2 19:56:10.530000 audit: BPF prog-id=26 op=UNLOAD Oct 2 19:56:10.531000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.531000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.531000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.531000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.531000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.531000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.531000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.531000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.531000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.531000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.531000 audit: BPF prog-id=45 op=LOAD Oct 2 19:56:10.531000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:56:10.531000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.531000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.531000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.531000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.531000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.531000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.531000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:56:10.531000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.531000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.531000 audit: BPF prog-id=46 op=LOAD Oct 2 19:56:10.531000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.531000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.531000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.531000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.531000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.531000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.531000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.531000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.531000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.531000 audit: BPF prog-id=47 op=LOAD Oct 2 19:56:10.531000 audit: BPF prog-id=24 op=UNLOAD Oct 2 19:56:10.531000 audit: BPF prog-id=25 op=UNLOAD Oct 2 19:56:10.532000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.532000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.532000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.532000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.532000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.532000 
audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.532000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.532000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.532000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.532000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.532000 audit: BPF prog-id=48 op=LOAD Oct 2 19:56:10.532000 audit: BPF prog-id=32 op=UNLOAD Oct 2 19:56:10.532000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.532000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.532000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.532000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.532000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.532000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.532000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.532000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.532000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.532000 audit: BPF prog-id=49 op=LOAD Oct 2 19:56:10.532000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.532000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.532000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.532000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.532000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.532000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.532000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.533000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.533000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.533000 audit: BPF prog-id=50 op=LOAD Oct 2 19:56:10.533000 audit: BPF prog-id=33 op=UNLOAD Oct 2 19:56:10.533000 audit: BPF prog-id=34 op=UNLOAD Oct 2 19:56:10.534000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.534000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.534000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.534000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.534000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.534000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.534000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.534000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.534000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:10.534000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0
Oct 2 19:56:10.534000 audit: BPF prog-id=51 op=LOAD
Oct 2 19:56:10.534000 audit: BPF prog-id=31 op=UNLOAD
Oct 2 19:56:10.541871 systemd[1]: Starting systemd-networkd-wait-online.service...
Oct 2 19:56:12.168770 systemd[1]: Finished systemd-networkd-wait-online.service.
Oct 2 19:56:12.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:12.169239 systemd[1]: Reached target network-online.target.
Oct 2 19:56:12.173861 systemd[1]: Started kubelet.service.
Oct 2 19:56:12.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:12.184548 systemd[1]: Starting coreos-metadata.service...
Oct 2 19:56:12.192960 systemd[1]: coreos-metadata.service: Deactivated successfully.
Oct 2 19:56:12.193141 systemd[1]: Finished coreos-metadata.service.
Oct 2 19:56:12.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:12.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:12.309062 kubelet[1332]: E1002 19:56:12.308969 1332 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Oct 2 19:56:12.312425 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 2 19:56:12.312556 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 2 19:56:12.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Oct 2 19:56:12.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:12.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:12.472822 systemd[1]: Stopped kubelet.service.
Oct 2 19:56:12.491549 systemd[1]: Reloading.
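The kubelet exit recorded just above is a plain precondition failure: /var/lib/kubelet/config.yaml has not been written yet, so run.go returns an error, the unit exits with status 1, and systemd marks it failed before stopping it and reloading. A minimal sketch of that precondition in Python, purely illustrative: the path is taken from the error message, while the function name and the check itself are not part of the kubelet.

    #!/usr/bin/env python3
    # Illustrative sketch only: reproduce the missing-config diagnosis seen in the
    # kubelet error above. Nothing here is kubelet code.
    import os
    import sys

    CONFIG_PATH = "/var/lib/kubelet/config.yaml"  # path from the log's error message

    def check_kubelet_config(path: str) -> int:
        if not os.path.exists(path):
            # Matches the failure mode in the log: open(...) fails with
            # "no such file or directory", so the process exits non-zero and
            # the unit fails with result 'exit-code'.
            print(f"failed to load kubelet config file: open {path}: no such file or directory")
            return 1
        print(f"kubelet config file present: {path}")
        return 0

    if __name__ == "__main__":
        sys.exit(check_kubelet_config(CONFIG_PATH))

The restarted kubelet further down gets past configuration loading, so the file is evidently in place by the time the unit is started again.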
Oct 2 19:56:12.542752 /usr/lib/systemd/system-generators/torcx-generator[1400]: time="2023-10-02T19:56:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:56:12.542777 /usr/lib/systemd/system-generators/torcx-generator[1400]: time="2023-10-02T19:56:12Z" level=info msg="torcx already run" Oct 2 19:56:12.600453 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:56:12.600605 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:56:12.617797 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:56:12.660000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.660000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.660000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.660000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.660000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.660000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.660000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.660000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.660000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.660000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.660000 audit: BPF prog-id=52 op=LOAD Oct 2 19:56:12.660000 audit: BPF prog-id=37 op=UNLOAD Oct 2 19:56:12.660000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.660000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.660000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.660000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.660000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.660000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.661000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.661000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.661000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.661000 audit: BPF prog-id=53 op=LOAD Oct 2 19:56:12.661000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.661000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.661000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.661000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.661000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.661000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.661000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.661000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.661000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.661000 audit: BPF prog-id=54 
op=LOAD Oct 2 19:56:12.661000 audit: BPF prog-id=38 op=UNLOAD Oct 2 19:56:12.661000 audit: BPF prog-id=39 op=UNLOAD Oct 2 19:56:12.662000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.662000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.662000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.662000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.662000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.662000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.662000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.662000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.662000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.662000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.663000 audit: BPF prog-id=55 op=LOAD Oct 2 19:56:12.663000 audit: BPF prog-id=40 op=UNLOAD Oct 2 19:56:12.663000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.663000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.663000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.663000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.663000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.663000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:56:12.663000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.663000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.663000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.663000 audit: BPF prog-id=56 op=LOAD Oct 2 19:56:12.663000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.663000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.663000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.663000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.663000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.663000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.663000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.663000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.664000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.664000 audit: BPF prog-id=57 op=LOAD Oct 2 19:56:12.664000 audit: BPF prog-id=41 op=UNLOAD Oct 2 19:56:12.664000 audit: BPF prog-id=42 op=UNLOAD Oct 2 19:56:12.664000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.664000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.664000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.664000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.664000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.664000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.664000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.664000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.664000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.665000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.665000 audit: BPF prog-id=58 op=LOAD Oct 2 19:56:12.665000 audit: BPF prog-id=43 op=UNLOAD Oct 2 19:56:12.666000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.666000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.666000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.667000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.667000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.667000 audit: BPF prog-id=59 op=LOAD Oct 2 19:56:12.667000 audit: BPF prog-id=44 op=UNLOAD Oct 2 19:56:12.667000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.667000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.667000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.667000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.667000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.668000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.668000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.668000 audit: BPF prog-id=60 op=LOAD Oct 2 19:56:12.668000 audit: BPF prog-id=45 op=UNLOAD Oct 2 19:56:12.668000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.668000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:56:12.668000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.669000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.669000 audit: BPF prog-id=61 op=LOAD Oct 2 19:56:12.669000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.669000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.669000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.669000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.669000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.669000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.669000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.669000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.669000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.669000 audit: BPF prog-id=62 op=LOAD Oct 2 19:56:12.669000 audit: BPF prog-id=46 op=UNLOAD Oct 2 19:56:12.669000 audit: BPF prog-id=47 op=UNLOAD Oct 2 19:56:12.670000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.670000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.670000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.670000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.670000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.670000 
audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.670000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.670000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.670000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.671000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.671000 audit: BPF prog-id=63 op=LOAD Oct 2 19:56:12.671000 audit: BPF prog-id=48 op=UNLOAD Oct 2 19:56:12.671000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.671000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.671000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.671000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.671000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.671000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.671000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.671000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.671000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.671000 audit: BPF prog-id=64 op=LOAD Oct 2 19:56:12.671000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.671000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.671000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.671000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.671000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.671000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.671000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.671000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.671000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.671000 audit: BPF prog-id=65 op=LOAD Oct 2 19:56:12.671000 audit: BPF prog-id=49 op=UNLOAD Oct 2 19:56:12.671000 audit: BPF prog-id=50 op=UNLOAD Oct 2 19:56:12.673000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.673000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.673000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.673000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.673000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.673000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.673000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.673000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.673000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:12.673000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:56:12.673000 audit: BPF prog-id=66 op=LOAD Oct 2 19:56:12.673000 audit: BPF prog-id=51 op=UNLOAD Oct 2 19:56:12.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:12.684602 systemd[1]: Started kubelet.service. Oct 2 19:56:12.728952 kubelet[1437]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:56:12.728952 kubelet[1437]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 2 19:56:12.728952 kubelet[1437]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:56:12.733709 kubelet[1437]: I1002 19:56:12.733622 1437 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:56:13.156929 kubelet[1437]: I1002 19:56:13.156833 1437 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Oct 2 19:56:13.156929 kubelet[1437]: I1002 19:56:13.156864 1437 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:56:13.157123 kubelet[1437]: I1002 19:56:13.157109 1437 server.go:837] "Client rotation is on, will bootstrap in background" Oct 2 19:56:13.160460 kubelet[1437]: I1002 19:56:13.160327 1437 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:56:13.162044 kubelet[1437]: W1002 19:56:13.162025 1437 machine.go:65] Cannot read vendor id correctly, set empty. Oct 2 19:56:13.162673 kubelet[1437]: I1002 19:56:13.162661 1437 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 2 19:56:13.162873 kubelet[1437]: I1002 19:56:13.162862 1437 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:56:13.163236 kubelet[1437]: I1002 19:56:13.162922 1437 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Oct 2 19:56:13.163317 kubelet[1437]: I1002 19:56:13.163257 1437 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 19:56:13.163317 kubelet[1437]: I1002 19:56:13.163272 1437 container_manager_linux.go:302] "Creating device plugin manager" Oct 2 19:56:13.163414 kubelet[1437]: I1002 19:56:13.163401 1437 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:56:13.169132 kubelet[1437]: I1002 19:56:13.169105 1437 kubelet.go:405] "Attempting to sync node with API server" Oct 2 19:56:13.169132 kubelet[1437]: I1002 19:56:13.169136 1437 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:56:13.169255 kubelet[1437]: I1002 19:56:13.169171 1437 kubelet.go:309] "Adding apiserver pod source" Oct 2 19:56:13.169255 kubelet[1437]: I1002 19:56:13.169183 1437 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:56:13.169785 kubelet[1437]: E1002 19:56:13.169763 1437 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:13.169859 kubelet[1437]: E1002 19:56:13.169814 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:13.170424 kubelet[1437]: I1002 19:56:13.170390 1437 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:56:13.170918 kubelet[1437]: W1002 19:56:13.170886 1437 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 2 19:56:13.171615 kubelet[1437]: I1002 19:56:13.171590 1437 server.go:1168] "Started kubelet" Oct 2 19:56:13.171964 kubelet[1437]: I1002 19:56:13.171938 1437 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:56:13.172126 kubelet[1437]: I1002 19:56:13.172099 1437 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Oct 2 19:56:13.172818 kubelet[1437]: I1002 19:56:13.172801 1437 server.go:461] "Adding debug handlers to kubelet server" Oct 2 19:56:13.173222 kubelet[1437]: E1002 19:56:13.173195 1437 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:56:13.173222 kubelet[1437]: E1002 19:56:13.173226 1437 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:56:13.172000 audit[1437]: AVC avc: denied { mac_admin } for pid=1437 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:13.172000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:56:13.172000 audit[1437]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=400003ce70 a1=4000cb6978 a2=400003ce40 a3=25 items=0 ppid=1 pid=1437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:13.172000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:56:13.172000 audit[1437]: AVC avc: denied { mac_admin } for pid=1437 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:13.172000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:56:13.172000 audit[1437]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000121c00 a1=4000cb6990 a2=400003cf00 a3=25 items=0 ppid=1 pid=1437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:13.172000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:56:13.174508 kubelet[1437]: I1002 19:56:13.173720 1437 kubelet.go:1355] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:56:13.174508 kubelet[1437]: I1002 19:56:13.173764 1437 kubelet.go:1359] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:56:13.174508 kubelet[1437]: I1002 19:56:13.173822 
1437 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:56:13.174674 kubelet[1437]: I1002 19:56:13.174590 1437 volume_manager.go:284] "Starting Kubelet Volume Manager" Oct 2 19:56:13.174674 kubelet[1437]: I1002 19:56:13.174665 1437 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Oct 2 19:56:13.188201 kubelet[1437]: W1002 19:56:13.188170 1437 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:56:13.188426 kubelet[1437]: E1002 19:56:13.188413 1437 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:56:13.188508 kubelet[1437]: W1002 19:56:13.188171 1437 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.13" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:56:13.188571 kubelet[1437]: E1002 19:56:13.188554 1437 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.13" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:56:13.188708 kubelet[1437]: E1002 19:56:13.188214 1437 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.13\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Oct 2 19:56:13.188775 kubelet[1437]: W1002 19:56:13.188280 1437 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:56:13.188860 kubelet[1437]: E1002 19:56:13.188849 1437 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:56:13.188918 kubelet[1437]: E1002 19:56:13.188313 1437 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a628e58b53ea3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 171564195, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 171564195, time.Local), Count:1, Type:"Normal", 
EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:13.190750 kubelet[1437]: E1002 19:56:13.190682 1437 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a628e58ce754e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 173216590, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 173216590, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:13.195495 kubelet[1437]: I1002 19:56:13.195471 1437 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 2 19:56:13.195495 kubelet[1437]: I1002 19:56:13.195490 1437 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 2 19:56:13.195595 kubelet[1437]: I1002 19:56:13.195510 1437 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:56:13.196120 kubelet[1437]: E1002 19:56:13.196053 1437 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a628e5a19465f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.13 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 194896991, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 194896991, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' 
(will not retry!) Oct 2 19:56:13.196968 kubelet[1437]: E1002 19:56:13.196910 1437 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a628e5a199987", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.13 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 194918279, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 194918279, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:13.198328 kubelet[1437]: E1002 19:56:13.198080 1437 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a628e5a19a797", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.13 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 194921879, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 194921879, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:13.198000 audit[1450]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1450 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:13.198000 audit[1450]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff66b1640 a2=0 a3=1 items=0 ppid=1437 pid=1450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:13.198000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:56:13.201000 audit[1456]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1456 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:13.201000 audit[1456]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffe7d23b60 a2=0 a3=1 items=0 ppid=1437 pid=1456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:13.201000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:56:13.277728 kubelet[1437]: I1002 19:56:13.277699 1437 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.13" Oct 2 19:56:13.278995 kubelet[1437]: E1002 19:56:13.278920 1437 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.13" Oct 2 19:56:13.279122 kubelet[1437]: E1002 19:56:13.279058 1437 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a628e5a19465f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.13 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 194896991, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 277655234, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a628e5a19465f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:13.280104 kubelet[1437]: E1002 19:56:13.280041 1437 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a628e5a199987", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.13 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 194918279, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 277666739, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a628e5a199987" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:13.281526 kubelet[1437]: E1002 19:56:13.281465 1437 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a628e5a19a797", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.13 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 194921879, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 277669439, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a628e5a19a797" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:13.203000 audit[1458]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1458 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:13.203000 audit[1458]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffffe34ace0 a2=0 a3=1 items=0 ppid=1437 pid=1458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:13.203000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:56:13.326000 audit[1463]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1463 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:13.326000 audit[1463]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffec1a4420 a2=0 a3=1 items=0 ppid=1437 pid=1463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:13.326000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:56:13.354000 audit[1468]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1468 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:13.354000 audit[1468]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffe724c460 a2=0 a3=1 items=0 ppid=1437 pid=1468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:13.354000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:56:13.355326 kubelet[1437]: I1002 19:56:13.355303 1437 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Oct 2 19:56:13.355000 audit[1469]: NETFILTER_CFG table=mangle:7 family=2 entries=1 op=nft_register_chain pid=1469 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:13.355000 audit[1469]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff159f3e0 a2=0 a3=1 items=0 ppid=1437 pid=1469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:13.355000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:56:13.355000 audit[1470]: NETFILTER_CFG table=mangle:8 family=10 entries=2 op=nft_register_chain pid=1470 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:13.355000 audit[1470]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff4d96f70 a2=0 a3=1 items=0 ppid=1437 pid=1470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:13.355000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:56:13.356647 kubelet[1437]: I1002 19:56:13.356556 1437 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Oct 2 19:56:13.356647 kubelet[1437]: I1002 19:56:13.356589 1437 status_manager.go:207] "Starting to sync pod status with apiserver" Oct 2 19:56:13.356647 kubelet[1437]: I1002 19:56:13.356610 1437 kubelet.go:2257] "Starting kubelet main sync loop" Oct 2 19:56:13.356723 kubelet[1437]: E1002 19:56:13.356655 1437 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 2 19:56:13.356000 audit[1471]: NETFILTER_CFG table=mangle:9 family=10 entries=1 op=nft_register_chain pid=1471 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:13.356000 audit[1471]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffec666130 a2=0 a3=1 items=0 ppid=1437 pid=1471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:13.356000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:56:13.357000 audit[1473]: NETFILTER_CFG table=nat:10 family=10 entries=2 op=nft_register_chain pid=1473 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:13.357000 audit[1473]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffcdb3ca40 a2=0 a3=1 items=0 ppid=1437 pid=1473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:13.357000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:56:13.359100 kubelet[1437]: W1002 19:56:13.359076 1437 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list 
resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:56:13.359151 kubelet[1437]: E1002 19:56:13.359104 1437 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:56:13.358000 audit[1474]: NETFILTER_CFG table=nat:11 family=2 entries=2 op=nft_register_chain pid=1474 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:13.358000 audit[1474]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffdb391110 a2=0 a3=1 items=0 ppid=1437 pid=1474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:13.358000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:56:13.359000 audit[1475]: NETFILTER_CFG table=filter:12 family=10 entries=2 op=nft_register_chain pid=1475 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:13.359000 audit[1475]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffeced3d90 a2=0 a3=1 items=0 ppid=1437 pid=1475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:13.359000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:56:13.359000 audit[1476]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_chain pid=1476 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:13.359000 audit[1476]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc63efff0 a2=0 a3=1 items=0 ppid=1437 pid=1476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:13.359000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:56:13.391101 kubelet[1437]: E1002 19:56:13.391073 1437 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.13\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Oct 2 19:56:13.436803 kubelet[1437]: I1002 19:56:13.436695 1437 policy_none.go:49] "None policy: Start" Oct 2 19:56:13.439398 kubelet[1437]: I1002 19:56:13.439368 1437 memory_manager.go:169] "Starting memorymanager" policy="None" Oct 2 19:56:13.439398 kubelet[1437]: I1002 19:56:13.439407 1437 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:56:13.449766 systemd[1]: Created slice kubepods.slice. Oct 2 19:56:13.453880 systemd[1]: Created slice kubepods-burstable.slice. Oct 2 19:56:13.456238 systemd[1]: Created slice kubepods-besteffort.slice. 
Oct 2 19:56:13.456931 kubelet[1437]: E1002 19:56:13.456909 1437 kubelet.go:2281] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 2 19:56:13.457018 kubelet[1437]: W1002 19:56:13.456996 1437 helpers.go:242] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/cpuset.cpus.effective: no such device Oct 2 19:56:13.466676 kubelet[1437]: I1002 19:56:13.466642 1437 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:56:13.466742 kubelet[1437]: I1002 19:56:13.466719 1437 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:56:13.465000 audit[1437]: AVC avc: denied { mac_admin } for pid=1437 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:13.465000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:56:13.465000 audit[1437]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000fb1ad0 a1=400111ce58 a2=4000fb1aa0 a3=25 items=0 ppid=1 pid=1437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:13.465000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:56:13.466913 kubelet[1437]: I1002 19:56:13.466896 1437 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:56:13.468142 kubelet[1437]: E1002 19:56:13.468122 1437 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.13\" not found" Oct 2 19:56:13.469629 kubelet[1437]: E1002 19:56:13.469210 1437 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a628e6a5f2c34", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 467913268, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 467913268, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User 
"system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:13.479605 kubelet[1437]: I1002 19:56:13.479585 1437 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.13" Oct 2 19:56:13.480970 kubelet[1437]: E1002 19:56:13.480938 1437 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.13" Oct 2 19:56:13.480970 kubelet[1437]: E1002 19:56:13.480899 1437 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a628e5a19465f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.13 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 194896991, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 479552742, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a628e5a19465f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:13.481923 kubelet[1437]: E1002 19:56:13.481839 1437 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a628e5a199987", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.13 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 194918279, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 479557869, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a628e5a199987" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:13.482822 kubelet[1437]: E1002 19:56:13.482748 1437 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a628e5a19a797", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.13 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 194921879, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 479560451, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a628e5a19a797" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:13.793482 kubelet[1437]: E1002 19:56:13.793374 1437 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.13\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Oct 2 19:56:13.882637 kubelet[1437]: I1002 19:56:13.882605 1437 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.13" Oct 2 19:56:13.884256 kubelet[1437]: E1002 19:56:13.884217 1437 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.13" Oct 2 19:56:13.884300 kubelet[1437]: E1002 19:56:13.884219 1437 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a628e5a19465f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.13 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 194896991, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 882541461, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events 
"10.0.0.13.178a628e5a19465f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:13.885627 kubelet[1437]: E1002 19:56:13.885555 1437 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a628e5a199987", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.13 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 194918279, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 882574450, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a628e5a199987" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:13.886683 kubelet[1437]: E1002 19:56:13.886612 1437 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a628e5a19a797", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.13 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 194921879, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 882579459, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a628e5a19a797" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:14.159623 kubelet[1437]: I1002 19:56:14.159478 1437 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 19:56:14.170796 kubelet[1437]: E1002 19:56:14.170762 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:14.557139 kubelet[1437]: E1002 19:56:14.557031 1437 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.13" not found Oct 2 19:56:14.598775 kubelet[1437]: E1002 19:56:14.598743 1437 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.13\" not found" node="10.0.0.13" Oct 2 19:56:14.684913 kubelet[1437]: I1002 19:56:14.684885 1437 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.13" Oct 2 19:56:14.689636 kubelet[1437]: I1002 19:56:14.689607 1437 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.13" Oct 2 19:56:14.734345 sudo[1264]: pam_unix(sudo:session): session closed for user root Oct 2 19:56:14.733000 audit[1264]: USER_END pid=1264 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:14.735759 kubelet[1437]: I1002 19:56:14.735719 1437 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 19:56:14.736009 env[1137]: time="2023-10-02T19:56:14.735967192Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 2 19:56:14.736381 kubelet[1437]: I1002 19:56:14.736363 1437 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 19:56:14.737141 kernel: kauditd_printk_skb: 411 callbacks suppressed Oct 2 19:56:14.738065 kernel: audit: type=1106 audit(1696276574.733:553): pid=1264 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:14.738121 kernel: audit: type=1104 audit(1696276574.733:554): pid=1264 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:14.733000 audit[1264]: CRED_DISP pid=1264 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:14.737715 sshd[1260]: pam_unix(sshd:session): session closed for user core Oct 2 19:56:14.736000 audit[1260]: USER_END pid=1260 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:56:14.740008 systemd[1]: sshd@6-10.0.0.13:22-10.0.0.1:44094.service: Deactivated successfully. Oct 2 19:56:14.740751 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 19:56:14.741307 systemd-logind[1125]: Session 7 logged out. Waiting for processes to exit. 
Oct 2 19:56:14.742025 kernel: audit: type=1106 audit(1696276574.736:555): pid=1260 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:56:14.742083 kernel: audit: type=1104 audit(1696276574.736:556): pid=1260 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:56:14.736000 audit[1260]: CRED_DISP pid=1260 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:56:14.742099 systemd-logind[1125]: Removed session 7. Oct 2 19:56:14.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.13:22-10.0.0.1:44094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:14.746052 kernel: audit: type=1131 audit(1696276574.738:557): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.13:22-10.0.0.1:44094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:15.171666 kubelet[1437]: E1002 19:56:15.171624 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:15.172005 kubelet[1437]: I1002 19:56:15.171635 1437 apiserver.go:52] "Watching apiserver" Oct 2 19:56:15.174938 kubelet[1437]: I1002 19:56:15.174903 1437 topology_manager.go:212] "Topology Admit Handler" Oct 2 19:56:15.175052 kubelet[1437]: I1002 19:56:15.174975 1437 topology_manager.go:212] "Topology Admit Handler" Oct 2 19:56:15.179661 systemd[1]: Created slice kubepods-burstable-pod6c7a3bb7_28ad_402c_9d68_8da6312464cd.slice. Oct 2 19:56:15.181998 systemd[1]: Created slice kubepods-besteffort-pod8e09506b_041b_4744_a754_0e6838cb2e09.slice. 
Oct 2 19:56:15.275644 kubelet[1437]: I1002 19:56:15.275612 1437 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Oct 2 19:56:15.290420 kubelet[1437]: I1002 19:56:15.290393 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz2th\" (UniqueName: \"kubernetes.io/projected/8e09506b-041b-4744-a754-0e6838cb2e09-kube-api-access-qz2th\") pod \"kube-proxy-w92br\" (UID: \"8e09506b-041b-4744-a754-0e6838cb2e09\") " pod="kube-system/kube-proxy-w92br" Oct 2 19:56:15.290617 kubelet[1437]: I1002 19:56:15.290605 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-host-proc-sys-net\") pod \"cilium-58xg8\" (UID: \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\") " pod="kube-system/cilium-58xg8" Oct 2 19:56:15.290709 kubelet[1437]: I1002 19:56:15.290700 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrnf6\" (UniqueName: \"kubernetes.io/projected/6c7a3bb7-28ad-402c-9d68-8da6312464cd-kube-api-access-hrnf6\") pod \"cilium-58xg8\" (UID: \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\") " pod="kube-system/cilium-58xg8" Oct 2 19:56:15.290800 kubelet[1437]: I1002 19:56:15.290789 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-cilium-cgroup\") pod \"cilium-58xg8\" (UID: \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\") " pod="kube-system/cilium-58xg8" Oct 2 19:56:15.290895 kubelet[1437]: I1002 19:56:15.290884 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-cni-path\") pod \"cilium-58xg8\" (UID: \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\") " pod="kube-system/cilium-58xg8" Oct 2 19:56:15.290981 kubelet[1437]: I1002 19:56:15.290970 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e09506b-041b-4744-a754-0e6838cb2e09-lib-modules\") pod \"kube-proxy-w92br\" (UID: \"8e09506b-041b-4744-a754-0e6838cb2e09\") " pod="kube-system/kube-proxy-w92br" Oct 2 19:56:15.291238 kubelet[1437]: I1002 19:56:15.291092 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-cilium-run\") pod \"cilium-58xg8\" (UID: \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\") " pod="kube-system/cilium-58xg8" Oct 2 19:56:15.291296 kubelet[1437]: I1002 19:56:15.291265 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-bpf-maps\") pod \"cilium-58xg8\" (UID: \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\") " pod="kube-system/cilium-58xg8" Oct 2 19:56:15.291547 kubelet[1437]: I1002 19:56:15.291464 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-lib-modules\") pod \"cilium-58xg8\" (UID: \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\") " pod="kube-system/cilium-58xg8" Oct 2 19:56:15.291547 
kubelet[1437]: I1002 19:56:15.291508 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c7a3bb7-28ad-402c-9d68-8da6312464cd-cilium-config-path\") pod \"cilium-58xg8\" (UID: \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\") " pod="kube-system/cilium-58xg8" Oct 2 19:56:15.291547 kubelet[1437]: I1002 19:56:15.291543 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6c7a3bb7-28ad-402c-9d68-8da6312464cd-clustermesh-secrets\") pod \"cilium-58xg8\" (UID: \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\") " pod="kube-system/cilium-58xg8" Oct 2 19:56:15.291711 kubelet[1437]: I1002 19:56:15.291568 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-host-proc-sys-kernel\") pod \"cilium-58xg8\" (UID: \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\") " pod="kube-system/cilium-58xg8" Oct 2 19:56:15.291711 kubelet[1437]: I1002 19:56:15.291587 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6c7a3bb7-28ad-402c-9d68-8da6312464cd-hubble-tls\") pod \"cilium-58xg8\" (UID: \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\") " pod="kube-system/cilium-58xg8" Oct 2 19:56:15.291711 kubelet[1437]: I1002 19:56:15.291606 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8e09506b-041b-4744-a754-0e6838cb2e09-kube-proxy\") pod \"kube-proxy-w92br\" (UID: \"8e09506b-041b-4744-a754-0e6838cb2e09\") " pod="kube-system/kube-proxy-w92br" Oct 2 19:56:15.291711 kubelet[1437]: I1002 19:56:15.291623 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e09506b-041b-4744-a754-0e6838cb2e09-xtables-lock\") pod \"kube-proxy-w92br\" (UID: \"8e09506b-041b-4744-a754-0e6838cb2e09\") " pod="kube-system/kube-proxy-w92br" Oct 2 19:56:15.291711 kubelet[1437]: I1002 19:56:15.291640 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-hostproc\") pod \"cilium-58xg8\" (UID: \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\") " pod="kube-system/cilium-58xg8" Oct 2 19:56:15.291711 kubelet[1437]: I1002 19:56:15.291658 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-etc-cni-netd\") pod \"cilium-58xg8\" (UID: \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\") " pod="kube-system/cilium-58xg8" Oct 2 19:56:15.291839 kubelet[1437]: I1002 19:56:15.291679 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-xtables-lock\") pod \"cilium-58xg8\" (UID: \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\") " pod="kube-system/cilium-58xg8" Oct 2 19:56:15.291839 kubelet[1437]: I1002 19:56:15.291692 1437 reconciler.go:41] "Reconciler: start to sync state" Oct 2 19:56:15.491217 kubelet[1437]: E1002 19:56:15.491081 1437 dns.go:158] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:56:15.492973 env[1137]: time="2023-10-02T19:56:15.492919142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-58xg8,Uid:6c7a3bb7-28ad-402c-9d68-8da6312464cd,Namespace:kube-system,Attempt:0,}" Oct 2 19:56:15.509285 kubelet[1437]: E1002 19:56:15.509251 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:56:15.510041 env[1137]: time="2023-10-02T19:56:15.509863637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w92br,Uid:8e09506b-041b-4744-a754-0e6838cb2e09,Namespace:kube-system,Attempt:0,}" Oct 2 19:56:16.172585 kubelet[1437]: E1002 19:56:16.172542 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:16.223502 env[1137]: time="2023-10-02T19:56:16.223461233Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:16.224736 env[1137]: time="2023-10-02T19:56:16.224710390Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:16.225975 env[1137]: time="2023-10-02T19:56:16.225950637Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:16.227541 env[1137]: time="2023-10-02T19:56:16.227512585Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:16.229680 env[1137]: time="2023-10-02T19:56:16.229649700Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:16.230463 env[1137]: time="2023-10-02T19:56:16.230428466Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:16.232745 env[1137]: time="2023-10-02T19:56:16.232715768Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:16.234364 env[1137]: time="2023-10-02T19:56:16.234339643Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:16.256732 env[1137]: time="2023-10-02T19:56:16.256647466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:56:16.256732 env[1137]: time="2023-10-02T19:56:16.256688620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:56:16.256732 env[1137]: time="2023-10-02T19:56:16.256699342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:56:16.257100 env[1137]: time="2023-10-02T19:56:16.257042328Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:56:16.257100 env[1137]: time="2023-10-02T19:56:16.257081866Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:56:16.257100 env[1137]: time="2023-10-02T19:56:16.257092706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:56:16.257309 env[1137]: time="2023-10-02T19:56:16.257271275Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/de62f4a94b7b0aaf77066110e7ffe89cc50c33c79c44f929ad715245f91fdf4e pid=1499 runtime=io.containerd.runc.v2 Oct 2 19:56:16.257309 env[1137]: time="2023-10-02T19:56:16.257294020Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4eddbe9b906b9b1015cb9c3228fc473c60fc983c98e9db58e2c83a9d04c20417 pid=1500 runtime=io.containerd.runc.v2 Oct 2 19:56:16.283315 systemd[1]: Started cri-containerd-4eddbe9b906b9b1015cb9c3228fc473c60fc983c98e9db58e2c83a9d04c20417.scope. Oct 2 19:56:16.284331 systemd[1]: Started cri-containerd-de62f4a94b7b0aaf77066110e7ffe89cc50c33c79c44f929ad715245f91fdf4e.scope. Oct 2 19:56:16.310000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.310000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.314661 kernel: audit: type=1400 audit(1696276576.310:558): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.314713 kernel: audit: type=1400 audit(1696276576.310:559): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.314731 kernel: audit: type=1400 audit(1696276576.310:560): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.310000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.310000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.317856 kernel: audit: type=1400 audit(1696276576.310:561): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.317915 kernel: audit: type=1400 
audit(1696276576.310:562): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.310000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.310000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.310000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.310000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.310000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.312000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.312000 audit: BPF prog-id=67 op=LOAD Oct 2 19:56:16.312000 audit[1519]: AVC avc: denied { bpf } for pid=1519 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.312000 audit[1519]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=400014db38 a2=10 a3=0 items=0 ppid=1500 pid=1519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:16.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465646462653962393036623962313031356362396333323238666334 Oct 2 19:56:16.312000 audit[1519]: AVC avc: denied { perfmon } for pid=1519 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.312000 audit[1519]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=400014d5a0 a2=3c a3=0 items=0 ppid=1500 pid=1519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:16.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465646462653962393036623962313031356362396333323238666334 Oct 2 19:56:16.312000 audit[1519]: AVC avc: denied { bpf } for pid=1519 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.312000 audit[1519]: AVC avc: denied { bpf } for pid=1519 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.312000 audit[1519]: AVC avc: denied { bpf } for pid=1519 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.312000 audit[1519]: AVC avc: denied { perfmon } for pid=1519 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.312000 audit[1519]: AVC avc: denied { perfmon } for pid=1519 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.312000 audit[1519]: AVC avc: denied { perfmon } for pid=1519 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.312000 audit[1519]: AVC avc: denied { perfmon } for pid=1519 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.312000 audit[1519]: AVC avc: denied { perfmon } for pid=1519 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.312000 audit[1519]: AVC avc: denied { bpf } for pid=1519 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.312000 audit[1519]: AVC avc: denied { bpf } for pid=1519 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.312000 audit: BPF prog-id=68 op=LOAD Oct 2 19:56:16.312000 audit[1519]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400014d8e0 a2=78 a3=0 items=0 ppid=1500 pid=1519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:16.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465646462653962393036623962313031356362396333323238666334 Oct 2 19:56:16.313000 audit[1519]: AVC avc: denied { bpf } for pid=1519 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.313000 audit[1519]: AVC avc: denied { bpf } for pid=1519 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.313000 audit[1519]: AVC avc: denied { perfmon } for pid=1519 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.313000 audit[1519]: AVC avc: denied { perfmon } for pid=1519 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.313000 audit[1519]: AVC avc: denied { perfmon } for pid=1519 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:56:16.313000 audit[1519]: AVC avc: denied { perfmon } for pid=1519 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.313000 audit[1519]: AVC avc: denied { perfmon } for pid=1519 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.313000 audit[1519]: AVC avc: denied { bpf } for pid=1519 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.315000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.315000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.315000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.315000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.315000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.315000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.315000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.315000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.315000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.316000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.316000 audit: BPF prog-id=70 op=LOAD Oct 2 19:56:16.313000 audit[1519]: AVC avc: denied { bpf } for pid=1519 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.313000 audit: BPF prog-id=69 op=LOAD Oct 2 19:56:16.313000 audit[1519]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400014d670 a2=78 a3=0 items=0 ppid=1500 pid=1519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:16.313000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465646462653962393036623962313031356362396333323238666334 Oct 2 19:56:16.318000 audit: BPF prog-id=69 op=UNLOAD Oct 2 19:56:16.318000 audit: BPF prog-id=68 op=UNLOAD Oct 2 19:56:16.318000 audit[1519]: AVC avc: denied { bpf } for pid=1519 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.318000 audit[1519]: AVC avc: denied { bpf } for pid=1519 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.318000 audit[1519]: AVC avc: denied { bpf } for pid=1519 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.318000 audit[1519]: AVC avc: denied { perfmon } for pid=1519 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.318000 audit[1519]: AVC avc: denied { perfmon } for pid=1519 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.318000 audit[1519]: AVC avc: denied { perfmon } for pid=1519 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.318000 audit[1519]: AVC avc: denied { perfmon } for pid=1519 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.318000 audit[1519]: AVC avc: denied { perfmon } for pid=1519 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.318000 audit[1519]: AVC avc: denied { bpf } for pid=1519 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.318000 audit[1518]: AVC avc: denied { bpf } for pid=1518 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.318000 audit[1518]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000195b38 a2=10 a3=0 items=0 ppid=1499 pid=1518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:16.318000 audit[1519]: AVC avc: denied { bpf } for pid=1519 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.318000 audit: BPF prog-id=71 op=LOAD Oct 2 19:56:16.318000 audit[1519]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400014db40 a2=78 a3=0 items=0 ppid=1500 pid=1519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:16.318000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465646462653962393036623962313031356362396333323238666334 Oct 2 19:56:16.318000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465363266346139346237623061616637373036363131306537666665 Oct 2 19:56:16.319000 audit[1518]: AVC avc: denied { perfmon } for pid=1518 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.319000 audit[1518]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=1499 pid=1518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:16.319000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465363266346139346237623061616637373036363131306537666665 Oct 2 19:56:16.319000 audit[1518]: AVC avc: denied { bpf } for pid=1518 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.319000 audit[1518]: AVC avc: denied { bpf } for pid=1518 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.319000 audit[1518]: AVC avc: denied { bpf } for pid=1518 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.319000 audit[1518]: AVC avc: denied { perfmon } for pid=1518 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.319000 audit[1518]: AVC avc: denied { perfmon } for pid=1518 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.319000 audit[1518]: AVC avc: denied { perfmon } for pid=1518 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.319000 audit[1518]: AVC avc: denied { perfmon } for pid=1518 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.319000 audit[1518]: AVC avc: denied { perfmon } for pid=1518 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.319000 audit[1518]: AVC avc: denied { bpf } for pid=1518 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.319000 audit[1518]: AVC avc: denied { bpf } for pid=1518 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.319000 
audit: BPF prog-id=72 op=LOAD Oct 2 19:56:16.319000 audit[1518]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=1499 pid=1518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:16.319000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465363266346139346237623061616637373036363131306537666665 Oct 2 19:56:16.320000 audit[1518]: AVC avc: denied { bpf } for pid=1518 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.320000 audit[1518]: AVC avc: denied { bpf } for pid=1518 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.320000 audit[1518]: AVC avc: denied { perfmon } for pid=1518 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.320000 audit[1518]: AVC avc: denied { perfmon } for pid=1518 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.320000 audit[1518]: AVC avc: denied { perfmon } for pid=1518 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.320000 audit[1518]: AVC avc: denied { perfmon } for pid=1518 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.320000 audit[1518]: AVC avc: denied { perfmon } for pid=1518 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.320000 audit[1518]: AVC avc: denied { bpf } for pid=1518 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.320000 audit[1518]: AVC avc: denied { bpf } for pid=1518 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.320000 audit: BPF prog-id=73 op=LOAD Oct 2 19:56:16.320000 audit[1518]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=1499 pid=1518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:16.320000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465363266346139346237623061616637373036363131306537666665 Oct 2 19:56:16.320000 audit: BPF prog-id=73 op=UNLOAD Oct 2 19:56:16.320000 audit: BPF prog-id=72 op=UNLOAD Oct 2 19:56:16.320000 audit[1518]: AVC avc: denied { bpf } for pid=1518 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:56:16.320000 audit[1518]: AVC avc: denied { bpf } for pid=1518 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.320000 audit[1518]: AVC avc: denied { bpf } for pid=1518 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.320000 audit[1518]: AVC avc: denied { perfmon } for pid=1518 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.320000 audit[1518]: AVC avc: denied { perfmon } for pid=1518 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.320000 audit[1518]: AVC avc: denied { perfmon } for pid=1518 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.320000 audit[1518]: AVC avc: denied { perfmon } for pid=1518 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.320000 audit[1518]: AVC avc: denied { perfmon } for pid=1518 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.320000 audit[1518]: AVC avc: denied { bpf } for pid=1518 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.320000 audit[1518]: AVC avc: denied { bpf } for pid=1518 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:16.320000 audit: BPF prog-id=74 op=LOAD Oct 2 19:56:16.320000 audit[1518]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=1499 pid=1518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:16.320000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465363266346139346237623061616637373036363131306537666665 Oct 2 19:56:16.333135 env[1137]: time="2023-10-02T19:56:16.333094938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-58xg8,Uid:6c7a3bb7-28ad-402c-9d68-8da6312464cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"4eddbe9b906b9b1015cb9c3228fc473c60fc983c98e9db58e2c83a9d04c20417\"" Oct 2 19:56:16.334062 kubelet[1437]: E1002 19:56:16.334042 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:56:16.335421 env[1137]: time="2023-10-02T19:56:16.335389965Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 2 19:56:16.337224 env[1137]: time="2023-10-02T19:56:16.337191031Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-w92br,Uid:8e09506b-041b-4744-a754-0e6838cb2e09,Namespace:kube-system,Attempt:0,} returns sandbox id \"de62f4a94b7b0aaf77066110e7ffe89cc50c33c79c44f929ad715245f91fdf4e\"" Oct 2 19:56:16.337757 kubelet[1437]: E1002 19:56:16.337736 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:56:16.398728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3255089238.mount: Deactivated successfully. Oct 2 19:56:17.172932 kubelet[1437]: E1002 19:56:17.172886 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:18.173507 kubelet[1437]: E1002 19:56:18.173467 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:19.174328 kubelet[1437]: E1002 19:56:19.174284 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:20.175081 kubelet[1437]: E1002 19:56:20.175034 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:20.284371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount89188867.mount: Deactivated successfully. Oct 2 19:56:21.176098 kubelet[1437]: E1002 19:56:21.176055 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:22.176598 kubelet[1437]: E1002 19:56:22.176555 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:22.578179 env[1137]: time="2023-10-02T19:56:22.578073454Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:22.579313 env[1137]: time="2023-10-02T19:56:22.579284908Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:22.580699 env[1137]: time="2023-10-02T19:56:22.580665096Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:22.581384 env[1137]: time="2023-10-02T19:56:22.581358150Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Oct 2 19:56:22.582274 env[1137]: time="2023-10-02T19:56:22.582235357Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.6\"" Oct 2 19:56:22.583451 env[1137]: time="2023-10-02T19:56:22.583416490Z" level=info msg="CreateContainer within sandbox \"4eddbe9b906b9b1015cb9c3228fc473c60fc983c98e9db58e2c83a9d04c20417\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:56:22.594622 env[1137]: time="2023-10-02T19:56:22.594585118Z" level=info msg="CreateContainer within sandbox \"4eddbe9b906b9b1015cb9c3228fc473c60fc983c98e9db58e2c83a9d04c20417\" for 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e290cf538e29e07a808f41e5194150f342b6b9a6eda5d5a68f8af25eae2185f2\"" Oct 2 19:56:22.595284 env[1137]: time="2023-10-02T19:56:22.595259137Z" level=info msg="StartContainer for \"e290cf538e29e07a808f41e5194150f342b6b9a6eda5d5a68f8af25eae2185f2\"" Oct 2 19:56:22.612637 systemd[1]: Started cri-containerd-e290cf538e29e07a808f41e5194150f342b6b9a6eda5d5a68f8af25eae2185f2.scope. Oct 2 19:56:22.634885 systemd[1]: cri-containerd-e290cf538e29e07a808f41e5194150f342b6b9a6eda5d5a68f8af25eae2185f2.scope: Deactivated successfully. Oct 2 19:56:22.638403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e290cf538e29e07a808f41e5194150f342b6b9a6eda5d5a68f8af25eae2185f2-rootfs.mount: Deactivated successfully. Oct 2 19:56:22.802207 env[1137]: time="2023-10-02T19:56:22.802130993Z" level=info msg="shim disconnected" id=e290cf538e29e07a808f41e5194150f342b6b9a6eda5d5a68f8af25eae2185f2 Oct 2 19:56:22.802207 env[1137]: time="2023-10-02T19:56:22.802178561Z" level=warning msg="cleaning up after shim disconnected" id=e290cf538e29e07a808f41e5194150f342b6b9a6eda5d5a68f8af25eae2185f2 namespace=k8s.io Oct 2 19:56:22.802207 env[1137]: time="2023-10-02T19:56:22.802187860Z" level=info msg="cleaning up dead shim" Oct 2 19:56:22.810597 env[1137]: time="2023-10-02T19:56:22.810555498Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:56:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1600 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:56:22Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e290cf538e29e07a808f41e5194150f342b6b9a6eda5d5a68f8af25eae2185f2/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:56:22.811020 env[1137]: time="2023-10-02T19:56:22.810910530Z" level=error msg="copy shim log" error="read /proc/self/fd/44: file already closed" Oct 2 19:56:22.811161 env[1137]: time="2023-10-02T19:56:22.811115784Z" level=error msg="Failed to pipe stdout of container \"e290cf538e29e07a808f41e5194150f342b6b9a6eda5d5a68f8af25eae2185f2\"" error="reading from a closed fifo" Oct 2 19:56:22.811217 env[1137]: time="2023-10-02T19:56:22.811191527Z" level=error msg="Failed to pipe stderr of container \"e290cf538e29e07a808f41e5194150f342b6b9a6eda5d5a68f8af25eae2185f2\"" error="reading from a closed fifo" Oct 2 19:56:22.812533 env[1137]: time="2023-10-02T19:56:22.812478327Z" level=error msg="StartContainer for \"e290cf538e29e07a808f41e5194150f342b6b9a6eda5d5a68f8af25eae2185f2\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:56:22.813205 kubelet[1437]: E1002 19:56:22.812786 1437 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e290cf538e29e07a808f41e5194150f342b6b9a6eda5d5a68f8af25eae2185f2" Oct 2 19:56:22.813205 kubelet[1437]: E1002 19:56:22.813125 1437 kuberuntime_manager.go:1212] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:56:22.813205 kubelet[1437]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:56:22.813205 kubelet[1437]: rm /hostbin/cilium-mount Oct 2 19:56:22.813378 kubelet[1437]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-hrnf6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-58xg8_kube-system(6c7a3bb7-28ad-402c-9d68-8da6312464cd): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:56:22.813462 kubelet[1437]: E1002 19:56:22.813175 1437 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-58xg8" podUID=6c7a3bb7-28ad-402c-9d68-8da6312464cd Oct 2 19:56:23.176965 kubelet[1437]: E1002 19:56:23.176911 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:23.398008 kubelet[1437]: E1002 19:56:23.397981 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:56:23.400229 env[1137]: time="2023-10-02T19:56:23.400188763Z" level=info msg="CreateContainer within sandbox \"4eddbe9b906b9b1015cb9c3228fc473c60fc983c98e9db58e2c83a9d04c20417\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:56:23.416177 env[1137]: time="2023-10-02T19:56:23.416126413Z" level=info msg="CreateContainer within sandbox \"4eddbe9b906b9b1015cb9c3228fc473c60fc983c98e9db58e2c83a9d04c20417\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns 
container id \"e95d17fff5c043676b1a59e1fda24529a3570d511c10c1e951d14f8c7f960976\"" Oct 2 19:56:23.416902 env[1137]: time="2023-10-02T19:56:23.416864382Z" level=info msg="StartContainer for \"e95d17fff5c043676b1a59e1fda24529a3570d511c10c1e951d14f8c7f960976\"" Oct 2 19:56:23.434057 systemd[1]: Started cri-containerd-e95d17fff5c043676b1a59e1fda24529a3570d511c10c1e951d14f8c7f960976.scope. Oct 2 19:56:23.456317 systemd[1]: cri-containerd-e95d17fff5c043676b1a59e1fda24529a3570d511c10c1e951d14f8c7f960976.scope: Deactivated successfully. Oct 2 19:56:23.462132 env[1137]: time="2023-10-02T19:56:23.462084400Z" level=info msg="shim disconnected" id=e95d17fff5c043676b1a59e1fda24529a3570d511c10c1e951d14f8c7f960976 Oct 2 19:56:23.462132 env[1137]: time="2023-10-02T19:56:23.462134035Z" level=warning msg="cleaning up after shim disconnected" id=e95d17fff5c043676b1a59e1fda24529a3570d511c10c1e951d14f8c7f960976 namespace=k8s.io Oct 2 19:56:23.462320 env[1137]: time="2023-10-02T19:56:23.462144058Z" level=info msg="cleaning up dead shim" Oct 2 19:56:23.471286 env[1137]: time="2023-10-02T19:56:23.471241571Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:56:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1637 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:56:23Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e95d17fff5c043676b1a59e1fda24529a3570d511c10c1e951d14f8c7f960976/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:56:23.471665 env[1137]: time="2023-10-02T19:56:23.471616223Z" level=error msg="copy shim log" error="read /proc/self/fd/52: file already closed" Oct 2 19:56:23.474520 env[1137]: time="2023-10-02T19:56:23.472895169Z" level=error msg="Failed to pipe stderr of container \"e95d17fff5c043676b1a59e1fda24529a3570d511c10c1e951d14f8c7f960976\"" error="reading from a closed fifo" Oct 2 19:56:23.474756 env[1137]: time="2023-10-02T19:56:23.474729689Z" level=error msg="Failed to pipe stdout of container \"e95d17fff5c043676b1a59e1fda24529a3570d511c10c1e951d14f8c7f960976\"" error="reading from a closed fifo" Oct 2 19:56:23.476304 env[1137]: time="2023-10-02T19:56:23.476254108Z" level=error msg="StartContainer for \"e95d17fff5c043676b1a59e1fda24529a3570d511c10c1e951d14f8c7f960976\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:56:23.476503 kubelet[1437]: E1002 19:56:23.476482 1437 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e95d17fff5c043676b1a59e1fda24529a3570d511c10c1e951d14f8c7f960976" Oct 2 19:56:23.476625 kubelet[1437]: E1002 19:56:23.476611 1437 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:56:23.476625 kubelet[1437]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:56:23.476625 kubelet[1437]: 
rm /hostbin/cilium-mount Oct 2 19:56:23.476715 kubelet[1437]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-hrnf6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-58xg8_kube-system(6c7a3bb7-28ad-402c-9d68-8da6312464cd): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:56:23.476715 kubelet[1437]: E1002 19:56:23.476648 1437 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-58xg8" podUID=6c7a3bb7-28ad-402c-9d68-8da6312464cd Oct 2 19:56:24.177599 kubelet[1437]: E1002 19:56:24.177558 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:24.400982 kubelet[1437]: I1002 19:56:24.400955 1437 scope.go:115] "RemoveContainer" containerID="e290cf538e29e07a808f41e5194150f342b6b9a6eda5d5a68f8af25eae2185f2" Oct 2 19:56:24.401329 kubelet[1437]: I1002 19:56:24.401314 1437 scope.go:115] "RemoveContainer" containerID="e290cf538e29e07a808f41e5194150f342b6b9a6eda5d5a68f8af25eae2185f2" Oct 2 19:56:24.402543 env[1137]: time="2023-10-02T19:56:24.402502410Z" level=info msg="RemoveContainer for \"e290cf538e29e07a808f41e5194150f342b6b9a6eda5d5a68f8af25eae2185f2\"" Oct 2 19:56:24.408393 env[1137]: time="2023-10-02T19:56:24.408340300Z" level=info msg="RemoveContainer for \"e290cf538e29e07a808f41e5194150f342b6b9a6eda5d5a68f8af25eae2185f2\"" Oct 2 19:56:24.408489 env[1137]: time="2023-10-02T19:56:24.408455840Z" level=error msg="RemoveContainer for \"e290cf538e29e07a808f41e5194150f342b6b9a6eda5d5a68f8af25eae2185f2\" failed" error="failed to set removing state for container \"e290cf538e29e07a808f41e5194150f342b6b9a6eda5d5a68f8af25eae2185f2\": container is already in removing state" Oct 2 19:56:24.408994 kubelet[1437]: E1002 19:56:24.408640 1437 
remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"e290cf538e29e07a808f41e5194150f342b6b9a6eda5d5a68f8af25eae2185f2\": container is already in removing state" containerID="e290cf538e29e07a808f41e5194150f342b6b9a6eda5d5a68f8af25eae2185f2" Oct 2 19:56:24.408994 kubelet[1437]: E1002 19:56:24.408684 1437 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "e290cf538e29e07a808f41e5194150f342b6b9a6eda5d5a68f8af25eae2185f2": container is already in removing state; Skipping pod "cilium-58xg8_kube-system(6c7a3bb7-28ad-402c-9d68-8da6312464cd)" Oct 2 19:56:24.408994 kubelet[1437]: E1002 19:56:24.408749 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:56:24.408994 kubelet[1437]: E1002 19:56:24.408962 1437 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-58xg8_kube-system(6c7a3bb7-28ad-402c-9d68-8da6312464cd)\"" pod="kube-system/cilium-58xg8" podUID=6c7a3bb7-28ad-402c-9d68-8da6312464cd Oct 2 19:56:24.423614 env[1137]: time="2023-10-02T19:56:24.423574398Z" level=info msg="RemoveContainer for \"e290cf538e29e07a808f41e5194150f342b6b9a6eda5d5a68f8af25eae2185f2\" returns successfully" Oct 2 19:56:24.431406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount400627340.mount: Deactivated successfully. Oct 2 19:56:24.823972 env[1137]: time="2023-10-02T19:56:24.823867231Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:24.825086 env[1137]: time="2023-10-02T19:56:24.825060010Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95195e68173b6cfcdd3125d7bbffa6759189df53b60ffe7a72256059cd5dd7af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:24.826132 env[1137]: time="2023-10-02T19:56:24.826104692Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:24.827870 env[1137]: time="2023-10-02T19:56:24.827836091Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:8e9eff2f6d0b398f9ac5f5a15c1cb7d5f468f28d64a78d593d57f72a969a54ef,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:24.828343 env[1137]: time="2023-10-02T19:56:24.828310791Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.6\" returns image reference \"sha256:95195e68173b6cfcdd3125d7bbffa6759189df53b60ffe7a72256059cd5dd7af\"" Oct 2 19:56:24.830090 env[1137]: time="2023-10-02T19:56:24.830058270Z" level=info msg="CreateContainer within sandbox \"de62f4a94b7b0aaf77066110e7ffe89cc50c33c79c44f929ad715245f91fdf4e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:56:24.837848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2566437101.mount: Deactivated successfully. Oct 2 19:56:24.840965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3807204890.mount: Deactivated successfully. 
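The entries above capture the recurring failure pattern on this node: the mount-cgroup init container of cilium-58xg8 dies at start with "write /proc/self/attr/keycreate: invalid argument", the retry then races kubelet's RemoveContainer ("container is already in removing state"), and the pod enters a 10s CrashLoopBackOff, while the unrelated kubelet warnings about the nameserver limit and the missing /etc/kubernetes/manifests path keep repeating in the background. When triaging a dump like this, tallying the repeated signatures helps separate the one real fault from the noise. The sketch below is a hypothetical helper, not part of this log; the signature names and regexes are assumptions distilled from the entries above.

#!/usr/bin/env python3
"""Tally recurring failure signatures in a journal dump like the one above.

Hypothetical triage helper; the signature list and patterns are assumptions
drawn from the surrounding log entries, not part of the original log.
"""
import re
import sys
from collections import Counter

# Patterns for entries that repeat in this dump.
SIGNATURES = {
    "keycreate_invalid_argument": re.compile(
        r"write /proc/self/attr/keycreate: invalid argument"),
    "crashloop_backoff": re.compile(r"CrashLoopBackOff"),
    "nameserver_limit_exceeded": re.compile(r"Nameserver limits were exceeded"),
    "avc_denied_bpf": re.compile(r"avc:\s+denied\s+\{ bpf \}"),
    "manifest_path_missing": re.compile(
        r"Unable to read config path.*?/etc/kubernetes/manifests"),
}

def tally(lines):
    """Count how often each signature appears across the given log lines."""
    counts = Counter()
    for line in lines:
        for name, pattern in SIGNATURES.items():
            if pattern.search(line):
                counts[name] += 1
    return counts

if __name__ == "__main__":
    # Read from a file given on the command line, or from stdin by default.
    path = sys.argv[1] if len(sys.argv) > 1 else "/dev/stdin"
    with open(path, errors="replace") as fh:
        for name, count in tally(fh).most_common():
            print(f"{count:6d}  {name}")

Run as, for example, python3 tally.py journal.txt (the file name is assumed); the per-signature counts make it easy to see that only the mount-cgroup start failures are progressing, while the DNS and manifest-path warnings are steady-state noise.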
Oct 2 19:56:24.844839 env[1137]: time="2023-10-02T19:56:24.844792514Z" level=info msg="CreateContainer within sandbox \"de62f4a94b7b0aaf77066110e7ffe89cc50c33c79c44f929ad715245f91fdf4e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"15d471f88d454a9ccac2bad3cbadfd7e24c255d53a7572675893e16f4500b614\"" Oct 2 19:56:24.845580 env[1137]: time="2023-10-02T19:56:24.845548882Z" level=info msg="StartContainer for \"15d471f88d454a9ccac2bad3cbadfd7e24c255d53a7572675893e16f4500b614\"" Oct 2 19:56:24.862405 systemd[1]: Started cri-containerd-15d471f88d454a9ccac2bad3cbadfd7e24c255d53a7572675893e16f4500b614.scope. Oct 2 19:56:24.899000 audit[1658]: AVC avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.901421 kernel: kauditd_printk_skb: 109 callbacks suppressed Oct 2 19:56:24.901467 kernel: audit: type=1400 audit(1696276584.899:594): avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.899000 audit[1658]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=1499 pid=1658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:24.905565 kernel: audit: type=1300 audit(1696276584.899:594): arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=1499 pid=1658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:24.905612 kernel: audit: type=1327 audit(1696276584.899:594): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135643437316638386434353461396363616332626164336362616466 Oct 2 19:56:24.899000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135643437316638386434353461396363616332626164336362616466 Oct 2 19:56:24.902000 audit[1658]: AVC avc: denied { bpf } for pid=1658 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.909484 kernel: audit: type=1400 audit(1696276584.902:595): avc: denied { bpf } for pid=1658 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.909527 kernel: audit: type=1400 audit(1696276584.902:595): avc: denied { bpf } for pid=1658 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.902000 audit[1658]: AVC avc: denied { bpf } for pid=1658 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.911260 kernel: audit: type=1400 audit(1696276584.902:595): avc: denied { bpf } for pid=1658 comm="runc" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.902000 audit[1658]: AVC avc: denied { bpf } for pid=1658 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.912815 kernel: audit: type=1400 audit(1696276584.902:595): avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.902000 audit[1658]: AVC avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.902000 audit[1658]: AVC avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.916457 kernel: audit: type=1400 audit(1696276584.902:595): avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.916518 kernel: audit: type=1400 audit(1696276584.902:595): avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.902000 audit[1658]: AVC avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.902000 audit[1658]: AVC avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.920110 kernel: audit: type=1400 audit(1696276584.902:595): avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.902000 audit[1658]: AVC avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.902000 audit[1658]: AVC avc: denied { bpf } for pid=1658 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.902000 audit[1658]: AVC avc: denied { bpf } for pid=1658 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.902000 audit: BPF prog-id=75 op=LOAD Oct 2 19:56:24.902000 audit[1658]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=1499 pid=1658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:24.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135643437316638386434353461396363616332626164336362616466 Oct 2 19:56:24.904000 audit[1658]: AVC avc: denied { bpf } 
for pid=1658 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.904000 audit[1658]: AVC avc: denied { bpf } for pid=1658 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.904000 audit[1658]: AVC avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.904000 audit[1658]: AVC avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.904000 audit[1658]: AVC avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.904000 audit[1658]: AVC avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.904000 audit[1658]: AVC avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.904000 audit[1658]: AVC avc: denied { bpf } for pid=1658 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.904000 audit[1658]: AVC avc: denied { bpf } for pid=1658 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.904000 audit: BPF prog-id=76 op=LOAD Oct 2 19:56:24.904000 audit[1658]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=1499 pid=1658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:24.904000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135643437316638386434353461396363616332626164336362616466 Oct 2 19:56:24.906000 audit: BPF prog-id=76 op=UNLOAD Oct 2 19:56:24.906000 audit: BPF prog-id=75 op=UNLOAD Oct 2 19:56:24.906000 audit[1658]: AVC avc: denied { bpf } for pid=1658 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.906000 audit[1658]: AVC avc: denied { bpf } for pid=1658 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.906000 audit[1658]: AVC avc: denied { bpf } for pid=1658 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.906000 audit[1658]: AVC avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.906000 audit[1658]: AVC avc: denied { perfmon } for pid=1658 
comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.906000 audit[1658]: AVC avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.906000 audit[1658]: AVC avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.906000 audit[1658]: AVC avc: denied { perfmon } for pid=1658 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.906000 audit[1658]: AVC avc: denied { bpf } for pid=1658 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.906000 audit[1658]: AVC avc: denied { bpf } for pid=1658 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:24.906000 audit: BPF prog-id=77 op=LOAD Oct 2 19:56:24.906000 audit[1658]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=1499 pid=1658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:24.906000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135643437316638386434353461396363616332626164336362616466 Oct 2 19:56:24.933363 env[1137]: time="2023-10-02T19:56:24.933254698Z" level=info msg="StartContainer for \"15d471f88d454a9ccac2bad3cbadfd7e24c255d53a7572675893e16f4500b614\" returns successfully" Oct 2 19:56:25.068000 audit[1708]: NETFILTER_CFG table=mangle:14 family=2 entries=1 op=nft_register_chain pid=1708 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:25.068000 audit[1708]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffb59c5a0 a2=0 a3=ffff8b3ce6c0 items=0 ppid=1669 pid=1708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.068000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:56:25.068000 audit[1709]: NETFILTER_CFG table=mangle:15 family=10 entries=1 op=nft_register_chain pid=1709 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:25.068000 audit[1709]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffebbf930 a2=0 a3=ffffb4b326c0 items=0 ppid=1669 pid=1709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.068000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:56:25.070000 audit[1712]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_chain 
pid=1712 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:25.070000 audit[1712]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd7550150 a2=0 a3=ffff8d62e6c0 items=0 ppid=1669 pid=1712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.070000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:56:25.071000 audit[1713]: NETFILTER_CFG table=nat:17 family=10 entries=1 op=nft_register_chain pid=1713 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:25.071000 audit[1713]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc9696630 a2=0 a3=ffffa64b46c0 items=0 ppid=1669 pid=1713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.071000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:56:25.072000 audit[1714]: NETFILTER_CFG table=filter:18 family=2 entries=1 op=nft_register_chain pid=1714 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:25.072000 audit[1714]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe3b97ee0 a2=0 a3=ffffb2a0e6c0 items=0 ppid=1669 pid=1714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.072000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:56:25.072000 audit[1715]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1715 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:25.072000 audit[1715]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdee7b490 a2=0 a3=ffffa681c6c0 items=0 ppid=1669 pid=1715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.072000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:56:25.172000 audit[1716]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_chain pid=1716 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:25.172000 audit[1716]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff14a5e30 a2=0 a3=ffffa2a326c0 items=0 ppid=1669 pid=1716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.172000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:56:25.175000 audit[1718]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1718 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:25.175000 audit[1718]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 
a1=ffffceb727e0 a2=0 a3=ffff80e046c0 items=0 ppid=1669 pid=1718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.175000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:56:25.178092 kubelet[1437]: E1002 19:56:25.178069 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:25.178000 audit[1721]: NETFILTER_CFG table=filter:22 family=2 entries=2 op=nft_register_chain pid=1721 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:25.178000 audit[1721]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffe2cd7ee0 a2=0 a3=ffffaa9256c0 items=0 ppid=1669 pid=1721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.178000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:56:25.180000 audit[1722]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_chain pid=1722 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:25.180000 audit[1722]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffd64de80 a2=0 a3=ffff886696c0 items=0 ppid=1669 pid=1722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.180000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:56:25.182000 audit[1724]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=1724 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:25.182000 audit[1724]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd0690400 a2=0 a3=ffffaab896c0 items=0 ppid=1669 pid=1724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.182000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:56:25.183000 audit[1725]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_chain pid=1725 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:25.183000 audit[1725]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd50f2650 a2=0 a3=ffff996616c0 items=0 ppid=1669 pid=1725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Oct 2 19:56:25.183000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:56:25.185000 audit[1727]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=1727 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:25.185000 audit[1727]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc9fe6ac0 a2=0 a3=ffffb7c4a6c0 items=0 ppid=1669 pid=1727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.185000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:56:25.188000 audit[1730]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_rule pid=1730 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:25.188000 audit[1730]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc4f33de0 a2=0 a3=ffffb85bd6c0 items=0 ppid=1669 pid=1730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.188000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:56:25.189000 audit[1731]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1731 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:25.189000 audit[1731]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd4889530 a2=0 a3=ffff86fe86c0 items=0 ppid=1669 pid=1731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.189000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:56:25.192000 audit[1733]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1733 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:25.192000 audit[1733]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffe6615e0 a2=0 a3=ffffbe6dd6c0 items=0 ppid=1669 pid=1733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.192000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:56:25.193000 audit[1734]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_chain pid=1734 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:25.193000 audit[1734]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe43d5f70 a2=0 
a3=ffffafe096c0 items=0 ppid=1669 pid=1734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.193000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:56:25.195000 audit[1736]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_rule pid=1736 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:25.195000 audit[1736]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd0b0b7f0 a2=0 a3=ffffb914e6c0 items=0 ppid=1669 pid=1736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.195000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:56:25.198000 audit[1739]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=1739 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:25.198000 audit[1739]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd0de7780 a2=0 a3=ffffa1a706c0 items=0 ppid=1669 pid=1739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.198000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:56:25.202000 audit[1742]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=1742 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:25.202000 audit[1742]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe5e595d0 a2=0 a3=ffffb08866c0 items=0 ppid=1669 pid=1742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.202000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:56:25.203000 audit[1743]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1743 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:25.203000 audit[1743]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe1ed3490 a2=0 a3=ffff7f7366c0 items=0 ppid=1669 pid=1743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.203000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:56:25.205000 
audit[1745]: NETFILTER_CFG table=nat:35 family=2 entries=2 op=nft_register_chain pid=1745 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:25.205000 audit[1745]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffe60ed5e0 a2=0 a3=ffffa4d526c0 items=0 ppid=1669 pid=1745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.205000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:56:25.230000 audit[1750]: NETFILTER_CFG table=nat:36 family=2 entries=2 op=nft_register_chain pid=1750 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:25.230000 audit[1750]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffd156c2a0 a2=0 a3=ffff9a2516c0 items=0 ppid=1669 pid=1750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.230000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:56:25.235000 audit[1755]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=1755 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:25.235000 audit[1755]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdd263600 a2=0 a3=ffff9ef9f6c0 items=0 ppid=1669 pid=1755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.235000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:56:25.238000 audit[1757]: NETFILTER_CFG table=nat:38 family=2 entries=2 op=nft_register_chain pid=1757 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:25.238000 audit[1757]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffd8250480 a2=0 a3=ffffb011f6c0 items=0 ppid=1669 pid=1757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.238000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:56:25.247000 audit[1759]: NETFILTER_CFG table=filter:39 family=2 entries=8 op=nft_register_rule pid=1759 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:56:25.247000 audit[1759]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4956 a0=3 a1=ffffd91eaea0 a2=0 a3=ffff90f8b6c0 items=0 ppid=1669 pid=1759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.247000 
audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:56:25.259000 audit[1759]: NETFILTER_CFG table=nat:40 family=2 entries=14 op=nft_register_chain pid=1759 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:56:25.259000 audit[1759]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffd91eaea0 a2=0 a3=ffff90f8b6c0 items=0 ppid=1669 pid=1759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.259000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:56:25.260000 audit[1765]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=1765 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:25.260000 audit[1765]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd87fc280 a2=0 a3=ffff89d066c0 items=0 ppid=1669 pid=1765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.260000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:56:25.263000 audit[1767]: NETFILTER_CFG table=filter:42 family=10 entries=2 op=nft_register_chain pid=1767 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:25.263000 audit[1767]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc7efa090 a2=0 a3=ffff8798d6c0 items=0 ppid=1669 pid=1767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.263000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:56:25.267000 audit[1770]: NETFILTER_CFG table=filter:43 family=10 entries=2 op=nft_register_chain pid=1770 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:25.267000 audit[1770]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffff728de40 a2=0 a3=ffff806496c0 items=0 ppid=1669 pid=1770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.267000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 19:56:25.268000 audit[1771]: NETFILTER_CFG table=filter:44 family=10 entries=1 op=nft_register_chain pid=1771 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:25.268000 audit[1771]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdb653ca0 a2=0 a3=ffffad86b6c0 items=0 ppid=1669 pid=1771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.268000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:56:25.270000 audit[1773]: NETFILTER_CFG table=filter:45 family=10 entries=1 op=nft_register_rule pid=1773 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:25.270000 audit[1773]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd36d55c0 a2=0 a3=ffffbafe96c0 items=0 ppid=1669 pid=1773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.270000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:56:25.271000 audit[1774]: NETFILTER_CFG table=filter:46 family=10 entries=1 op=nft_register_chain pid=1774 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:25.271000 audit[1774]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe9ec4030 a2=0 a3=ffffa26536c0 items=0 ppid=1669 pid=1774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.271000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:56:25.274000 audit[1776]: NETFILTER_CFG table=filter:47 family=10 entries=1 op=nft_register_rule pid=1776 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:25.274000 audit[1776]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff2fbaff0 a2=0 a3=ffff980d26c0 items=0 ppid=1669 pid=1776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.274000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 19:56:25.277000 audit[1779]: NETFILTER_CFG table=filter:48 family=10 entries=2 op=nft_register_chain pid=1779 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:25.277000 audit[1779]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffe0a39cf0 a2=0 a3=ffff959136c0 items=0 ppid=1669 pid=1779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.277000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:56:25.279000 audit[1780]: NETFILTER_CFG table=filter:49 family=10 entries=1 op=nft_register_chain pid=1780 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:25.279000 audit[1780]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff8f11050 a2=0 a3=ffffbb5a66c0 items=0 ppid=1669 pid=1780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.279000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:56:25.281000 audit[1782]: NETFILTER_CFG table=filter:50 family=10 entries=1 op=nft_register_rule pid=1782 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:25.281000 audit[1782]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffa341390 a2=0 a3=ffff98b6b6c0 items=0 ppid=1669 pid=1782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.281000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:56:25.282000 audit[1783]: NETFILTER_CFG table=filter:51 family=10 entries=1 op=nft_register_chain pid=1783 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:25.282000 audit[1783]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd0fe11d0 a2=0 a3=ffffad2106c0 items=0 ppid=1669 pid=1783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.282000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:56:25.285000 audit[1785]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_rule pid=1785 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:25.285000 audit[1785]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe4dbfcf0 a2=0 a3=ffffafcf46c0 items=0 ppid=1669 pid=1785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.285000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:56:25.288000 audit[1788]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_rule pid=1788 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:25.288000 audit[1788]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff2259be0 a2=0 a3=ffff7faa26c0 items=0 ppid=1669 pid=1788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.288000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:56:25.292000 audit[1791]: NETFILTER_CFG table=filter:54 family=10 entries=1 op=nft_register_rule pid=1791 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:25.292000 audit[1791]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe44c7490 a2=0 a3=ffff968eb6c0 items=0 ppid=1669 pid=1791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.292000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:56:25.293000 audit[1792]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=1792 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:25.293000 audit[1792]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd9945970 a2=0 a3=ffffa3c6f6c0 items=0 ppid=1669 pid=1792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.293000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:56:25.295000 audit[1794]: NETFILTER_CFG table=nat:56 family=10 entries=2 op=nft_register_chain pid=1794 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:25.295000 audit[1794]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffe3a6fed0 a2=0 a3=ffff95ec96c0 items=0 ppid=1669 pid=1794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.295000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:56:25.299000 audit[1797]: NETFILTER_CFG table=nat:57 family=10 entries=2 op=nft_register_chain pid=1797 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:25.299000 audit[1797]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=fffff1e95500 a2=0 a3=ffff97f366c0 items=0 ppid=1669 pid=1797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.299000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:56:25.300000 audit[1798]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=1798 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:25.300000 audit[1798]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=100 a0=3 a1=ffffec78c390 a2=0 a3=ffff9b79b6c0 items=0 ppid=1669 pid=1798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.300000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:56:25.302000 audit[1800]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_rule pid=1800 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:25.302000 audit[1800]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc65da1e0 a2=0 a3=ffff9f6576c0 items=0 ppid=1669 pid=1800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.302000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:56:25.305000 audit[1803]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_rule pid=1803 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:25.305000 audit[1803]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd6aca6c0 a2=0 a3=ffffbe3726c0 items=0 ppid=1669 pid=1803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.305000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:56:25.307000 audit[1804]: NETFILTER_CFG table=nat:61 family=10 entries=1 op=nft_register_chain pid=1804 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:25.307000 audit[1804]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffec8f4c40 a2=0 a3=ffff88a426c0 items=0 ppid=1669 pid=1804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.307000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:56:25.309000 audit[1806]: NETFILTER_CFG table=nat:62 family=10 entries=2 op=nft_register_chain pid=1806 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:25.309000 audit[1806]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffdef8c8f0 a2=0 a3=ffffafea36c0 items=0 ppid=1669 pid=1806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.309000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:56:25.312000 audit[1808]: NETFILTER_CFG table=filter:63 family=10 entries=3 op=nft_register_rule pid=1808 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:56:25.312000 audit[1808]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=1916 a0=3 a1=ffffc93d8c10 a2=0 a3=ffffa99396c0 items=0 ppid=1669 pid=1808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.312000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:56:25.312000 audit[1808]: NETFILTER_CFG table=nat:64 family=10 entries=7 op=nft_register_chain pid=1808 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:56:25.312000 audit[1808]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1968 a0=3 a1=ffffc93d8c10 a2=0 a3=ffffa99396c0 items=0 ppid=1669 pid=1808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:25.312000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:56:25.404268 kubelet[1437]: E1002 19:56:25.404239 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:56:25.404735 kubelet[1437]: E1002 19:56:25.404718 1437 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-58xg8_kube-system(6c7a3bb7-28ad-402c-9d68-8da6312464cd)\"" pod="kube-system/cilium-58xg8" podUID=6c7a3bb7-28ad-402c-9d68-8da6312464cd Oct 2 19:56:25.405127 kubelet[1437]: E1002 19:56:25.405110 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:56:25.427776 kubelet[1437]: I1002 19:56:25.427670 1437 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-w92br" podStartSLOduration=2.9372742020000002 podCreationTimestamp="2023-10-02 19:56:14 +0000 UTC" firstStartedPulling="2023-10-02 19:56:16.338166539 +0000 UTC m=+3.649059716" lastFinishedPulling="2023-10-02 19:56:24.828511186 +0000 UTC m=+12.139404364" observedRunningTime="2023-10-02 19:56:25.427093434 +0000 UTC m=+12.737986612" watchObservedRunningTime="2023-10-02 19:56:25.42761885 +0000 UTC m=+12.738511988" Oct 2 19:56:25.906098 kubelet[1437]: W1002 19:56:25.905967 1437 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c7a3bb7_28ad_402c_9d68_8da6312464cd.slice/cri-containerd-e290cf538e29e07a808f41e5194150f342b6b9a6eda5d5a68f8af25eae2185f2.scope WatchSource:0}: container "e290cf538e29e07a808f41e5194150f342b6b9a6eda5d5a68f8af25eae2185f2" in namespace "k8s.io": not found Oct 2 19:56:26.178470 kubelet[1437]: E1002 19:56:26.178359 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:26.406219 kubelet[1437]: E1002 19:56:26.406183 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:56:27.178687 kubelet[1437]: E1002 19:56:27.178644 1437 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:28.179340 kubelet[1437]: E1002 19:56:28.179308 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:29.012788 kubelet[1437]: W1002 19:56:29.012748 1437 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c7a3bb7_28ad_402c_9d68_8da6312464cd.slice/cri-containerd-e95d17fff5c043676b1a59e1fda24529a3570d511c10c1e951d14f8c7f960976.scope WatchSource:0}: task e95d17fff5c043676b1a59e1fda24529a3570d511c10c1e951d14f8c7f960976 not found: not found Oct 2 19:56:29.180044 kubelet[1437]: E1002 19:56:29.180000 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:30.180443 kubelet[1437]: E1002 19:56:30.180409 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:31.181930 kubelet[1437]: E1002 19:56:31.181896 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:32.182552 kubelet[1437]: E1002 19:56:32.182502 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:33.169426 kubelet[1437]: E1002 19:56:33.169388 1437 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:33.183093 kubelet[1437]: E1002 19:56:33.183054 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:34.183542 kubelet[1437]: E1002 19:56:34.183490 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:35.184160 kubelet[1437]: E1002 19:56:35.184083 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:36.185002 kubelet[1437]: E1002 19:56:36.184961 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:37.185893 kubelet[1437]: E1002 19:56:37.185865 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:37.357395 kubelet[1437]: E1002 19:56:37.357359 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:56:37.359897 env[1137]: time="2023-10-02T19:56:37.359857539Z" level=info msg="CreateContainer within sandbox \"4eddbe9b906b9b1015cb9c3228fc473c60fc983c98e9db58e2c83a9d04c20417\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:56:37.368742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1767580936.mount: Deactivated successfully. Oct 2 19:56:37.371981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2038725014.mount: Deactivated successfully. 
Oct 2 19:56:37.377317 env[1137]: time="2023-10-02T19:56:37.377257827Z" level=info msg="CreateContainer within sandbox \"4eddbe9b906b9b1015cb9c3228fc473c60fc983c98e9db58e2c83a9d04c20417\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"2e58f7b6cd5868176d081fcaa72917c4a2fdb9e8f573164e6f02569cd2ae8631\"" Oct 2 19:56:37.378461 env[1137]: time="2023-10-02T19:56:37.377684651Z" level=info msg="StartContainer for \"2e58f7b6cd5868176d081fcaa72917c4a2fdb9e8f573164e6f02569cd2ae8631\"" Oct 2 19:56:37.392727 systemd[1]: Started cri-containerd-2e58f7b6cd5868176d081fcaa72917c4a2fdb9e8f573164e6f02569cd2ae8631.scope. Oct 2 19:56:37.419208 systemd[1]: cri-containerd-2e58f7b6cd5868176d081fcaa72917c4a2fdb9e8f573164e6f02569cd2ae8631.scope: Deactivated successfully. Oct 2 19:56:37.534709 env[1137]: time="2023-10-02T19:56:37.534657874Z" level=info msg="shim disconnected" id=2e58f7b6cd5868176d081fcaa72917c4a2fdb9e8f573164e6f02569cd2ae8631 Oct 2 19:56:37.534951 env[1137]: time="2023-10-02T19:56:37.534932152Z" level=warning msg="cleaning up after shim disconnected" id=2e58f7b6cd5868176d081fcaa72917c4a2fdb9e8f573164e6f02569cd2ae8631 namespace=k8s.io Oct 2 19:56:37.535036 env[1137]: time="2023-10-02T19:56:37.535022153Z" level=info msg="cleaning up dead shim" Oct 2 19:56:37.543139 env[1137]: time="2023-10-02T19:56:37.543099325Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:56:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1834 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:56:37Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/2e58f7b6cd5868176d081fcaa72917c4a2fdb9e8f573164e6f02569cd2ae8631/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:56:37.543528 env[1137]: time="2023-10-02T19:56:37.543472437Z" level=error msg="copy shim log" error="read /proc/self/fd/58: file already closed" Oct 2 19:56:37.543913 env[1137]: time="2023-10-02T19:56:37.543683331Z" level=error msg="Failed to pipe stdout of container \"2e58f7b6cd5868176d081fcaa72917c4a2fdb9e8f573164e6f02569cd2ae8631\"" error="reading from a closed fifo" Oct 2 19:56:37.544081 env[1137]: time="2023-10-02T19:56:37.543704313Z" level=error msg="Failed to pipe stderr of container \"2e58f7b6cd5868176d081fcaa72917c4a2fdb9e8f573164e6f02569cd2ae8631\"" error="reading from a closed fifo" Oct 2 19:56:37.545913 env[1137]: time="2023-10-02T19:56:37.545872325Z" level=error msg="StartContainer for \"2e58f7b6cd5868176d081fcaa72917c4a2fdb9e8f573164e6f02569cd2ae8631\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:56:37.546249 kubelet[1437]: E1002 19:56:37.546209 1437 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="2e58f7b6cd5868176d081fcaa72917c4a2fdb9e8f573164e6f02569cd2ae8631" Oct 2 19:56:37.546330 kubelet[1437]: E1002 19:56:37.546308 1437 kuberuntime_manager.go:1212] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:56:37.546330 kubelet[1437]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:56:37.546330 kubelet[1437]: rm /hostbin/cilium-mount Oct 2 19:56:37.546330 kubelet[1437]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-hrnf6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-58xg8_kube-system(6c7a3bb7-28ad-402c-9d68-8da6312464cd): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:56:37.546462 kubelet[1437]: E1002 19:56:37.546346 1437 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-58xg8" podUID=6c7a3bb7-28ad-402c-9d68-8da6312464cd Oct 2 19:56:38.186729 kubelet[1437]: E1002 19:56:38.186690 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:38.367254 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e58f7b6cd5868176d081fcaa72917c4a2fdb9e8f573164e6f02569cd2ae8631-rootfs.mount: Deactivated successfully. 
Oct 2 19:56:38.441059 kubelet[1437]: I1002 19:56:38.440942 1437 scope.go:115] "RemoveContainer" containerID="e95d17fff5c043676b1a59e1fda24529a3570d511c10c1e951d14f8c7f960976" Oct 2 19:56:38.441466 kubelet[1437]: I1002 19:56:38.441286 1437 scope.go:115] "RemoveContainer" containerID="e95d17fff5c043676b1a59e1fda24529a3570d511c10c1e951d14f8c7f960976" Oct 2 19:56:38.442528 env[1137]: time="2023-10-02T19:56:38.442495512Z" level=info msg="RemoveContainer for \"e95d17fff5c043676b1a59e1fda24529a3570d511c10c1e951d14f8c7f960976\"" Oct 2 19:56:38.442934 env[1137]: time="2023-10-02T19:56:38.442907595Z" level=info msg="RemoveContainer for \"e95d17fff5c043676b1a59e1fda24529a3570d511c10c1e951d14f8c7f960976\"" Oct 2 19:56:38.443135 env[1137]: time="2023-10-02T19:56:38.443096730Z" level=error msg="RemoveContainer for \"e95d17fff5c043676b1a59e1fda24529a3570d511c10c1e951d14f8c7f960976\" failed" error="failed to set removing state for container \"e95d17fff5c043676b1a59e1fda24529a3570d511c10c1e951d14f8c7f960976\": container is already in removing state" Oct 2 19:56:38.443337 kubelet[1437]: E1002 19:56:38.443318 1437 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"e95d17fff5c043676b1a59e1fda24529a3570d511c10c1e951d14f8c7f960976\": container is already in removing state" containerID="e95d17fff5c043676b1a59e1fda24529a3570d511c10c1e951d14f8c7f960976" Oct 2 19:56:38.443390 kubelet[1437]: E1002 19:56:38.443352 1437 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "e95d17fff5c043676b1a59e1fda24529a3570d511c10c1e951d14f8c7f960976": container is already in removing state; Skipping pod "cilium-58xg8_kube-system(6c7a3bb7-28ad-402c-9d68-8da6312464cd)" Oct 2 19:56:38.443422 kubelet[1437]: E1002 19:56:38.443406 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:56:38.443697 kubelet[1437]: E1002 19:56:38.443672 1437 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-58xg8_kube-system(6c7a3bb7-28ad-402c-9d68-8da6312464cd)\"" pod="kube-system/cilium-58xg8" podUID=6c7a3bb7-28ad-402c-9d68-8da6312464cd Oct 2 19:56:38.457428 env[1137]: time="2023-10-02T19:56:38.457360707Z" level=info msg="RemoveContainer for \"e95d17fff5c043676b1a59e1fda24529a3570d511c10c1e951d14f8c7f960976\" returns successfully" Oct 2 19:56:39.187704 kubelet[1437]: E1002 19:56:39.187648 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:40.188026 kubelet[1437]: E1002 19:56:40.187953 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:40.639182 kubelet[1437]: W1002 19:56:40.639132 1437 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c7a3bb7_28ad_402c_9d68_8da6312464cd.slice/cri-containerd-2e58f7b6cd5868176d081fcaa72917c4a2fdb9e8f573164e6f02569cd2ae8631.scope WatchSource:0}: task 2e58f7b6cd5868176d081fcaa72917c4a2fdb9e8f573164e6f02569cd2ae8631 not found: not found Oct 2 19:56:41.188624 kubelet[1437]: E1002 19:56:41.188547 1437 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:42.189637 kubelet[1437]: E1002 19:56:42.189574 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:43.190445 kubelet[1437]: E1002 19:56:43.190402 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:44.191501 kubelet[1437]: E1002 19:56:44.191441 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:45.192468 kubelet[1437]: E1002 19:56:45.192413 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:46.193210 kubelet[1437]: E1002 19:56:46.193151 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:47.193672 kubelet[1437]: E1002 19:56:47.193611 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:48.193837 kubelet[1437]: E1002 19:56:48.193794 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:49.195121 kubelet[1437]: E1002 19:56:49.195070 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:50.196243 kubelet[1437]: E1002 19:56:50.196174 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:51.197028 kubelet[1437]: E1002 19:56:51.196943 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:52.197817 kubelet[1437]: E1002 19:56:52.197781 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:52.997778 update_engine[1127]: I1002 19:56:52.997725 1127 update_attempter.cc:505] Updating boot flags... 
Oct 2 19:56:53.169915 kubelet[1437]: E1002 19:56:53.169879 1437 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:53.198823 kubelet[1437]: E1002 19:56:53.198787 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:53.357586 kubelet[1437]: E1002 19:56:53.357434 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:56:53.357768 kubelet[1437]: E1002 19:56:53.357695 1437 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-58xg8_kube-system(6c7a3bb7-28ad-402c-9d68-8da6312464cd)\"" pod="kube-system/cilium-58xg8" podUID=6c7a3bb7-28ad-402c-9d68-8da6312464cd Oct 2 19:56:54.199523 kubelet[1437]: E1002 19:56:54.199479 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:55.199769 kubelet[1437]: E1002 19:56:55.199733 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:56.200602 kubelet[1437]: E1002 19:56:56.200567 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:57.201893 kubelet[1437]: E1002 19:56:57.201851 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:58.202695 kubelet[1437]: E1002 19:56:58.202652 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:59.202795 kubelet[1437]: E1002 19:56:59.202750 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:00.203616 kubelet[1437]: E1002 19:57:00.203550 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:01.204638 kubelet[1437]: E1002 19:57:01.204588 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:02.205211 kubelet[1437]: E1002 19:57:02.205156 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:03.205852 kubelet[1437]: E1002 19:57:03.205808 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:04.206275 kubelet[1437]: E1002 19:57:04.206212 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:05.206965 kubelet[1437]: E1002 19:57:05.206938 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:06.208362 kubelet[1437]: E1002 19:57:06.208305 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:06.357771 kubelet[1437]: E1002 19:57:06.357735 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Oct 2 19:57:06.359753 env[1137]: time="2023-10-02T19:57:06.359712687Z" level=info msg="CreateContainer within sandbox \"4eddbe9b906b9b1015cb9c3228fc473c60fc983c98e9db58e2c83a9d04c20417\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:57:06.369160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2965707587.mount: Deactivated successfully. Oct 2 19:57:06.374966 env[1137]: time="2023-10-02T19:57:06.374918119Z" level=info msg="CreateContainer within sandbox \"4eddbe9b906b9b1015cb9c3228fc473c60fc983c98e9db58e2c83a9d04c20417\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"aca1bda73b76627dba6d64eff34070d6676ab64942bfea9456359866fe269a6b\"" Oct 2 19:57:06.375642 env[1137]: time="2023-10-02T19:57:06.375360387Z" level=info msg="StartContainer for \"aca1bda73b76627dba6d64eff34070d6676ab64942bfea9456359866fe269a6b\"" Oct 2 19:57:06.392989 systemd[1]: Started cri-containerd-aca1bda73b76627dba6d64eff34070d6676ab64942bfea9456359866fe269a6b.scope. Oct 2 19:57:06.412663 systemd[1]: cri-containerd-aca1bda73b76627dba6d64eff34070d6676ab64942bfea9456359866fe269a6b.scope: Deactivated successfully. Oct 2 19:57:06.421501 env[1137]: time="2023-10-02T19:57:06.421448312Z" level=info msg="shim disconnected" id=aca1bda73b76627dba6d64eff34070d6676ab64942bfea9456359866fe269a6b Oct 2 19:57:06.421501 env[1137]: time="2023-10-02T19:57:06.421499876Z" level=warning msg="cleaning up after shim disconnected" id=aca1bda73b76627dba6d64eff34070d6676ab64942bfea9456359866fe269a6b namespace=k8s.io Oct 2 19:57:06.421729 env[1137]: time="2023-10-02T19:57:06.421510197Z" level=info msg="cleaning up dead shim" Oct 2 19:57:06.429561 env[1137]: time="2023-10-02T19:57:06.429511958Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:57:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1888 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:57:06Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/aca1bda73b76627dba6d64eff34070d6676ab64942bfea9456359866fe269a6b/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:57:06.429829 env[1137]: time="2023-10-02T19:57:06.429777736Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:57:06.430212 env[1137]: time="2023-10-02T19:57:06.430166041Z" level=error msg="Failed to pipe stderr of container \"aca1bda73b76627dba6d64eff34070d6676ab64942bfea9456359866fe269a6b\"" error="reading from a closed fifo" Oct 2 19:57:06.430272 env[1137]: time="2023-10-02T19:57:06.430166961Z" level=error msg="Failed to pipe stdout of container \"aca1bda73b76627dba6d64eff34070d6676ab64942bfea9456359866fe269a6b\"" error="reading from a closed fifo" Oct 2 19:57:06.431753 env[1137]: time="2023-10-02T19:57:06.431705221Z" level=error msg="StartContainer for \"aca1bda73b76627dba6d64eff34070d6676ab64942bfea9456359866fe269a6b\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:57:06.431960 kubelet[1437]: E1002 19:57:06.431934 1437 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container 
init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="aca1bda73b76627dba6d64eff34070d6676ab64942bfea9456359866fe269a6b" Oct 2 19:57:06.432082 kubelet[1437]: E1002 19:57:06.432066 1437 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:57:06.432082 kubelet[1437]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:57:06.432082 kubelet[1437]: rm /hostbin/cilium-mount Oct 2 19:57:06.432082 kubelet[1437]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-hrnf6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-58xg8_kube-system(6c7a3bb7-28ad-402c-9d68-8da6312464cd): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:57:06.432228 kubelet[1437]: E1002 19:57:06.432117 1437 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-58xg8" podUID=6c7a3bb7-28ad-402c-9d68-8da6312464cd Oct 2 19:57:06.483036 kubelet[1437]: I1002 19:57:06.482474 1437 scope.go:115] "RemoveContainer" containerID="2e58f7b6cd5868176d081fcaa72917c4a2fdb9e8f573164e6f02569cd2ae8631" Oct 2 19:57:06.483036 kubelet[1437]: I1002 19:57:06.482804 1437 scope.go:115] "RemoveContainer" containerID="2e58f7b6cd5868176d081fcaa72917c4a2fdb9e8f573164e6f02569cd2ae8631" Oct 2 19:57:06.483770 env[1137]: time="2023-10-02T19:57:06.483735734Z" level=info msg="RemoveContainer for \"2e58f7b6cd5868176d081fcaa72917c4a2fdb9e8f573164e6f02569cd2ae8631\"" Oct 2 19:57:06.484183 env[1137]: time="2023-10-02T19:57:06.484148361Z" level=info msg="RemoveContainer for 
\"2e58f7b6cd5868176d081fcaa72917c4a2fdb9e8f573164e6f02569cd2ae8631\"" Oct 2 19:57:06.484346 env[1137]: time="2023-10-02T19:57:06.484313892Z" level=error msg="RemoveContainer for \"2e58f7b6cd5868176d081fcaa72917c4a2fdb9e8f573164e6f02569cd2ae8631\" failed" error="failed to set removing state for container \"2e58f7b6cd5868176d081fcaa72917c4a2fdb9e8f573164e6f02569cd2ae8631\": container is already in removing state" Oct 2 19:57:06.484473 kubelet[1437]: E1002 19:57:06.484457 1437 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"2e58f7b6cd5868176d081fcaa72917c4a2fdb9e8f573164e6f02569cd2ae8631\": container is already in removing state" containerID="2e58f7b6cd5868176d081fcaa72917c4a2fdb9e8f573164e6f02569cd2ae8631" Oct 2 19:57:06.484523 kubelet[1437]: E1002 19:57:06.484488 1437 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "2e58f7b6cd5868176d081fcaa72917c4a2fdb9e8f573164e6f02569cd2ae8631": container is already in removing state; Skipping pod "cilium-58xg8_kube-system(6c7a3bb7-28ad-402c-9d68-8da6312464cd)" Oct 2 19:57:06.484562 kubelet[1437]: E1002 19:57:06.484551 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:57:06.484902 kubelet[1437]: E1002 19:57:06.484766 1437 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-58xg8_kube-system(6c7a3bb7-28ad-402c-9d68-8da6312464cd)\"" pod="kube-system/cilium-58xg8" podUID=6c7a3bb7-28ad-402c-9d68-8da6312464cd Oct 2 19:57:06.486395 env[1137]: time="2023-10-02T19:57:06.486362225Z" level=info msg="RemoveContainer for \"2e58f7b6cd5868176d081fcaa72917c4a2fdb9e8f573164e6f02569cd2ae8631\" returns successfully" Oct 2 19:57:07.208448 kubelet[1437]: E1002 19:57:07.208392 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:07.368007 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aca1bda73b76627dba6d64eff34070d6676ab64942bfea9456359866fe269a6b-rootfs.mount: Deactivated successfully. 
Oct 2 19:57:08.209309 kubelet[1437]: E1002 19:57:08.209271 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:09.209984 kubelet[1437]: E1002 19:57:09.209937 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:09.526249 kubelet[1437]: W1002 19:57:09.526212 1437 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c7a3bb7_28ad_402c_9d68_8da6312464cd.slice/cri-containerd-aca1bda73b76627dba6d64eff34070d6676ab64942bfea9456359866fe269a6b.scope WatchSource:0}: task aca1bda73b76627dba6d64eff34070d6676ab64942bfea9456359866fe269a6b not found: not found Oct 2 19:57:10.211102 kubelet[1437]: E1002 19:57:10.211052 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:11.211941 kubelet[1437]: E1002 19:57:11.211908 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:12.213089 kubelet[1437]: E1002 19:57:12.213032 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:13.169329 kubelet[1437]: E1002 19:57:13.169258 1437 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:13.213414 kubelet[1437]: E1002 19:57:13.213366 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:14.213738 kubelet[1437]: E1002 19:57:14.213689 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:15.214852 kubelet[1437]: E1002 19:57:15.214817 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:16.216209 kubelet[1437]: E1002 19:57:16.216153 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:17.216664 kubelet[1437]: E1002 19:57:17.216611 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:18.216835 kubelet[1437]: E1002 19:57:18.216788 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:18.358034 kubelet[1437]: E1002 19:57:18.357963 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:57:18.358273 kubelet[1437]: E1002 19:57:18.358251 1437 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-58xg8_kube-system(6c7a3bb7-28ad-402c-9d68-8da6312464cd)\"" pod="kube-system/cilium-58xg8" podUID=6c7a3bb7-28ad-402c-9d68-8da6312464cd Oct 2 19:57:19.217883 kubelet[1437]: E1002 19:57:19.217842 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:20.218688 kubelet[1437]: E1002 19:57:20.218640 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:57:21.219605 kubelet[1437]: E1002 19:57:21.219575 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:22.220871 kubelet[1437]: E1002 19:57:22.220835 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:23.221309 kubelet[1437]: E1002 19:57:23.221277 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:24.221694 kubelet[1437]: E1002 19:57:24.221663 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:25.222655 kubelet[1437]: E1002 19:57:25.222622 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:26.223078 kubelet[1437]: E1002 19:57:26.223044 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:27.224372 kubelet[1437]: E1002 19:57:27.224332 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:28.225205 kubelet[1437]: E1002 19:57:28.225162 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:29.226069 kubelet[1437]: E1002 19:57:29.226030 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:29.357478 kubelet[1437]: E1002 19:57:29.357450 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:57:29.357885 kubelet[1437]: E1002 19:57:29.357868 1437 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-58xg8_kube-system(6c7a3bb7-28ad-402c-9d68-8da6312464cd)\"" pod="kube-system/cilium-58xg8" podUID=6c7a3bb7-28ad-402c-9d68-8da6312464cd Oct 2 19:57:30.226490 kubelet[1437]: E1002 19:57:30.226453 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:31.227317 kubelet[1437]: E1002 19:57:31.227289 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:32.227995 kubelet[1437]: E1002 19:57:32.227961 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:33.170149 kubelet[1437]: E1002 19:57:33.170098 1437 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:33.229532 kubelet[1437]: E1002 19:57:33.229459 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:34.230608 kubelet[1437]: E1002 19:57:34.230564 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:35.231288 kubelet[1437]: E1002 19:57:35.231246 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:57:36.232007 kubelet[1437]: E1002 19:57:36.231962 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:37.232479 kubelet[1437]: E1002 19:57:37.232441 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:38.233293 kubelet[1437]: E1002 19:57:38.233250 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:39.234436 kubelet[1437]: E1002 19:57:39.234390 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:40.234882 kubelet[1437]: E1002 19:57:40.234839 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:41.235728 kubelet[1437]: E1002 19:57:41.235660 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:42.236006 kubelet[1437]: E1002 19:57:42.235970 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:42.357974 kubelet[1437]: E1002 19:57:42.357929 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:57:42.358191 kubelet[1437]: E1002 19:57:42.358162 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:57:42.358231 kubelet[1437]: E1002 19:57:42.358205 1437 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-58xg8_kube-system(6c7a3bb7-28ad-402c-9d68-8da6312464cd)\"" pod="kube-system/cilium-58xg8" podUID=6c7a3bb7-28ad-402c-9d68-8da6312464cd Oct 2 19:57:43.236815 kubelet[1437]: E1002 19:57:43.236780 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:44.237459 kubelet[1437]: E1002 19:57:44.237431 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:45.238513 kubelet[1437]: E1002 19:57:45.238481 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:46.240041 kubelet[1437]: E1002 19:57:46.239983 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:47.240675 kubelet[1437]: E1002 19:57:47.240598 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:48.241097 kubelet[1437]: E1002 19:57:48.241027 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:49.241647 kubelet[1437]: E1002 19:57:49.241576 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:50.242052 kubelet[1437]: E1002 19:57:50.241991 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:57:51.242414 kubelet[1437]: E1002 19:57:51.242372 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:52.243551 kubelet[1437]: E1002 19:57:52.243520 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:53.170316 kubelet[1437]: E1002 19:57:53.170254 1437 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:53.244944 kubelet[1437]: E1002 19:57:53.244888 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:54.245163 kubelet[1437]: E1002 19:57:54.245103 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:55.245835 kubelet[1437]: E1002 19:57:55.245805 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:56.247134 kubelet[1437]: E1002 19:57:56.247096 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:56.357174 kubelet[1437]: E1002 19:57:56.357140 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:57:56.359225 env[1137]: time="2023-10-02T19:57:56.359158800Z" level=info msg="CreateContainer within sandbox \"4eddbe9b906b9b1015cb9c3228fc473c60fc983c98e9db58e2c83a9d04c20417\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 19:57:56.370844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3071282663.mount: Deactivated successfully. Oct 2 19:57:56.374206 env[1137]: time="2023-10-02T19:57:56.374150711Z" level=info msg="CreateContainer within sandbox \"4eddbe9b906b9b1015cb9c3228fc473c60fc983c98e9db58e2c83a9d04c20417\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"8ca2fed731b91346f521334bf3acfed3f2a7557e107d4943f9b812e577d3e0e6\"" Oct 2 19:57:56.374698 env[1137]: time="2023-10-02T19:57:56.374669590Z" level=info msg="StartContainer for \"8ca2fed731b91346f521334bf3acfed3f2a7557e107d4943f9b812e577d3e0e6\"" Oct 2 19:57:56.393211 systemd[1]: Started cri-containerd-8ca2fed731b91346f521334bf3acfed3f2a7557e107d4943f9b812e577d3e0e6.scope. Oct 2 19:57:56.420206 systemd[1]: cri-containerd-8ca2fed731b91346f521334bf3acfed3f2a7557e107d4943f9b812e577d3e0e6.scope: Deactivated successfully. 
Oct 2 19:57:56.427427 env[1137]: time="2023-10-02T19:57:56.427373038Z" level=info msg="shim disconnected" id=8ca2fed731b91346f521334bf3acfed3f2a7557e107d4943f9b812e577d3e0e6 Oct 2 19:57:56.427647 env[1137]: time="2023-10-02T19:57:56.427626877Z" level=warning msg="cleaning up after shim disconnected" id=8ca2fed731b91346f521334bf3acfed3f2a7557e107d4943f9b812e577d3e0e6 namespace=k8s.io Oct 2 19:57:56.427712 env[1137]: time="2023-10-02T19:57:56.427698157Z" level=info msg="cleaning up dead shim" Oct 2 19:57:56.437045 env[1137]: time="2023-10-02T19:57:56.436982632Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:57:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1933 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:57:56Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/8ca2fed731b91346f521334bf3acfed3f2a7557e107d4943f9b812e577d3e0e6/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:57:56.437319 env[1137]: time="2023-10-02T19:57:56.437253551Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:57:56.438522 env[1137]: time="2023-10-02T19:57:56.438417031Z" level=error msg="Failed to pipe stderr of container \"8ca2fed731b91346f521334bf3acfed3f2a7557e107d4943f9b812e577d3e0e6\"" error="reading from a closed fifo" Oct 2 19:57:56.439638 env[1137]: time="2023-10-02T19:57:56.439592070Z" level=error msg="Failed to pipe stdout of container \"8ca2fed731b91346f521334bf3acfed3f2a7557e107d4943f9b812e577d3e0e6\"" error="reading from a closed fifo" Oct 2 19:57:56.441341 env[1137]: time="2023-10-02T19:57:56.441298549Z" level=error msg="StartContainer for \"8ca2fed731b91346f521334bf3acfed3f2a7557e107d4943f9b812e577d3e0e6\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:57:56.441603 kubelet[1437]: E1002 19:57:56.441565 1437 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="8ca2fed731b91346f521334bf3acfed3f2a7557e107d4943f9b812e577d3e0e6" Oct 2 19:57:56.441788 kubelet[1437]: E1002 19:57:56.441667 1437 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:57:56.441788 kubelet[1437]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:57:56.441788 kubelet[1437]: rm /hostbin/cilium-mount Oct 2 19:57:56.441788 kubelet[1437]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-hrnf6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-58xg8_kube-system(6c7a3bb7-28ad-402c-9d68-8da6312464cd): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:57:56.441788 kubelet[1437]: E1002 19:57:56.441706 1437 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-58xg8" podUID=6c7a3bb7-28ad-402c-9d68-8da6312464cd Oct 2 19:57:56.555581 kubelet[1437]: I1002 19:57:56.554896 1437 scope.go:115] "RemoveContainer" containerID="aca1bda73b76627dba6d64eff34070d6676ab64942bfea9456359866fe269a6b" Oct 2 19:57:56.555581 kubelet[1437]: I1002 19:57:56.555247 1437 scope.go:115] "RemoveContainer" containerID="aca1bda73b76627dba6d64eff34070d6676ab64942bfea9456359866fe269a6b" Oct 2 19:57:56.557357 env[1137]: time="2023-10-02T19:57:56.557247837Z" level=info msg="RemoveContainer for \"aca1bda73b76627dba6d64eff34070d6676ab64942bfea9456359866fe269a6b\"" Oct 2 19:57:56.557661 env[1137]: time="2023-10-02T19:57:56.557534077Z" level=info msg="RemoveContainer for \"aca1bda73b76627dba6d64eff34070d6676ab64942bfea9456359866fe269a6b\"" Oct 2 19:57:56.557661 env[1137]: time="2023-10-02T19:57:56.557622557Z" level=error msg="RemoveContainer for \"aca1bda73b76627dba6d64eff34070d6676ab64942bfea9456359866fe269a6b\" failed" error="failed to set removing state for container \"aca1bda73b76627dba6d64eff34070d6676ab64942bfea9456359866fe269a6b\": container is already in removing state" Oct 2 19:57:56.557882 kubelet[1437]: E1002 19:57:56.557790 1437 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"aca1bda73b76627dba6d64eff34070d6676ab64942bfea9456359866fe269a6b\": container is already in 
removing state" containerID="aca1bda73b76627dba6d64eff34070d6676ab64942bfea9456359866fe269a6b" Oct 2 19:57:56.557882 kubelet[1437]: I1002 19:57:56.557829 1437 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:aca1bda73b76627dba6d64eff34070d6676ab64942bfea9456359866fe269a6b} err="rpc error: code = Unknown desc = failed to set removing state for container \"aca1bda73b76627dba6d64eff34070d6676ab64942bfea9456359866fe269a6b\": container is already in removing state" Oct 2 19:57:56.665400 env[1137]: time="2023-10-02T19:57:56.665357330Z" level=info msg="RemoveContainer for \"aca1bda73b76627dba6d64eff34070d6676ab64942bfea9456359866fe269a6b\" returns successfully" Oct 2 19:57:56.665658 kubelet[1437]: E1002 19:57:56.665633 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:57:56.665894 kubelet[1437]: E1002 19:57:56.665882 1437 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-58xg8_kube-system(6c7a3bb7-28ad-402c-9d68-8da6312464cd)\"" pod="kube-system/cilium-58xg8" podUID=6c7a3bb7-28ad-402c-9d68-8da6312464cd Oct 2 19:57:57.248076 kubelet[1437]: E1002 19:57:57.248007 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:57.365278 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ca2fed731b91346f521334bf3acfed3f2a7557e107d4943f9b812e577d3e0e6-rootfs.mount: Deactivated successfully. Oct 2 19:57:58.248951 kubelet[1437]: E1002 19:57:58.248914 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:59.250103 kubelet[1437]: E1002 19:57:59.250055 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:59.530970 kubelet[1437]: W1002 19:57:59.530846 1437 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c7a3bb7_28ad_402c_9d68_8da6312464cd.slice/cri-containerd-8ca2fed731b91346f521334bf3acfed3f2a7557e107d4943f9b812e577d3e0e6.scope WatchSource:0}: task 8ca2fed731b91346f521334bf3acfed3f2a7557e107d4943f9b812e577d3e0e6 not found: not found Oct 2 19:58:00.250570 kubelet[1437]: E1002 19:58:00.250538 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:01.251239 kubelet[1437]: E1002 19:58:01.251179 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:02.252210 kubelet[1437]: E1002 19:58:02.252150 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:03.252836 kubelet[1437]: E1002 19:58:03.252779 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:04.253840 kubelet[1437]: E1002 19:58:04.253794 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:05.254475 kubelet[1437]: E1002 19:58:05.254444 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:58:06.255102 kubelet[1437]: E1002 19:58:06.255028 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:07.255765 kubelet[1437]: E1002 19:58:07.255722 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:08.256119 kubelet[1437]: E1002 19:58:08.256055 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:08.357112 kubelet[1437]: E1002 19:58:08.357056 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:58:08.357303 kubelet[1437]: E1002 19:58:08.357277 1437 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-58xg8_kube-system(6c7a3bb7-28ad-402c-9d68-8da6312464cd)\"" pod="kube-system/cilium-58xg8" podUID=6c7a3bb7-28ad-402c-9d68-8da6312464cd Oct 2 19:58:09.256685 kubelet[1437]: E1002 19:58:09.256648 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:10.257647 kubelet[1437]: E1002 19:58:10.257597 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:11.258030 kubelet[1437]: E1002 19:58:11.257983 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:12.259057 kubelet[1437]: E1002 19:58:12.258987 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:13.169514 kubelet[1437]: E1002 19:58:13.169479 1437 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:13.203064 kubelet[1437]: E1002 19:58:13.203029 1437 kubelet_node_status.go:452] "Node not becoming ready in time after startup" Oct 2 19:58:13.259531 kubelet[1437]: E1002 19:58:13.259492 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:13.492203 kubelet[1437]: E1002 19:58:13.492175 1437 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:14.260119 kubelet[1437]: E1002 19:58:14.260077 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:15.260755 kubelet[1437]: E1002 19:58:15.260714 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:16.261126 kubelet[1437]: E1002 19:58:16.261087 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:17.261419 kubelet[1437]: E1002 19:58:17.261356 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:18.261620 kubelet[1437]: E1002 19:58:18.261584 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:58:18.492803 kubelet[1437]: E1002 19:58:18.492777 1437 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:19.262579 kubelet[1437]: E1002 19:58:19.262536 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:20.263544 kubelet[1437]: E1002 19:58:20.263483 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:21.263776 kubelet[1437]: E1002 19:58:21.263720 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:22.264508 kubelet[1437]: E1002 19:58:22.264473 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:23.265877 kubelet[1437]: E1002 19:58:23.265826 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:23.357875 kubelet[1437]: E1002 19:58:23.357842 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:58:23.358112 kubelet[1437]: E1002 19:58:23.358095 1437 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-58xg8_kube-system(6c7a3bb7-28ad-402c-9d68-8da6312464cd)\"" pod="kube-system/cilium-58xg8" podUID=6c7a3bb7-28ad-402c-9d68-8da6312464cd Oct 2 19:58:23.493981 kubelet[1437]: E1002 19:58:23.493952 1437 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:24.266807 kubelet[1437]: E1002 19:58:24.266765 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:25.267813 kubelet[1437]: E1002 19:58:25.267759 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:26.268002 kubelet[1437]: E1002 19:58:26.267955 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:27.270382 kubelet[1437]: E1002 19:58:27.270324 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:28.270940 kubelet[1437]: E1002 19:58:28.270903 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:28.494604 kubelet[1437]: E1002 19:58:28.494575 1437 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:29.271961 kubelet[1437]: E1002 19:58:29.271920 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:30.272903 kubelet[1437]: E1002 19:58:30.272835 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:58:31.273292 kubelet[1437]: E1002 19:58:31.273255 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:32.274527 kubelet[1437]: E1002 19:58:32.274479 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:33.170109 kubelet[1437]: E1002 19:58:33.170073 1437 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:33.275646 kubelet[1437]: E1002 19:58:33.275605 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:33.495583 kubelet[1437]: E1002 19:58:33.495535 1437 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:34.275968 kubelet[1437]: E1002 19:58:34.275906 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:35.276954 kubelet[1437]: E1002 19:58:35.276897 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:36.282290 kubelet[1437]: E1002 19:58:36.282238 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:36.358110 kubelet[1437]: E1002 19:58:36.358058 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:58:36.358311 kubelet[1437]: E1002 19:58:36.358282 1437 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-58xg8_kube-system(6c7a3bb7-28ad-402c-9d68-8da6312464cd)\"" pod="kube-system/cilium-58xg8" podUID=6c7a3bb7-28ad-402c-9d68-8da6312464cd Oct 2 19:58:37.283442 kubelet[1437]: E1002 19:58:37.283378 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:38.284315 kubelet[1437]: E1002 19:58:38.284270 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:38.496279 kubelet[1437]: E1002 19:58:38.496234 1437 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:39.285184 kubelet[1437]: E1002 19:58:39.285140 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:40.285738 kubelet[1437]: E1002 19:58:40.285701 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:41.286916 kubelet[1437]: E1002 19:58:41.286868 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:42.287602 kubelet[1437]: E1002 19:58:42.287544 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:43.288341 kubelet[1437]: E1002 19:58:43.288290 
1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:43.497307 kubelet[1437]: E1002 19:58:43.497271 1437 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:44.289028 kubelet[1437]: E1002 19:58:44.288984 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:45.289419 kubelet[1437]: E1002 19:58:45.289376 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:46.289692 kubelet[1437]: E1002 19:58:46.289658 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:47.290948 kubelet[1437]: E1002 19:58:47.290904 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:48.291858 kubelet[1437]: E1002 19:58:48.291807 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:48.357854 kubelet[1437]: E1002 19:58:48.357820 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:58:48.358110 kubelet[1437]: E1002 19:58:48.358090 1437 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-58xg8_kube-system(6c7a3bb7-28ad-402c-9d68-8da6312464cd)\"" pod="kube-system/cilium-58xg8" podUID=6c7a3bb7-28ad-402c-9d68-8da6312464cd Oct 2 19:58:48.497997 kubelet[1437]: E1002 19:58:48.497955 1437 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:49.292920 kubelet[1437]: E1002 19:58:49.292871 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:50.293202 kubelet[1437]: E1002 19:58:50.293164 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:50.357247 kubelet[1437]: E1002 19:58:50.357192 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:58:51.294421 kubelet[1437]: E1002 19:58:51.294381 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:52.295535 kubelet[1437]: E1002 19:58:52.295497 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:53.169578 kubelet[1437]: E1002 19:58:53.169541 1437 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:53.296081 kubelet[1437]: E1002 19:58:53.296003 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:53.503415 kubelet[1437]: E1002 19:58:53.503387 1437 kubelet.go:2760] "Container 
runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:54.300109 kubelet[1437]: E1002 19:58:54.296830 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:55.297546 kubelet[1437]: E1002 19:58:55.297484 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:56.300932 kubelet[1437]: E1002 19:58:56.298056 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:57.301623 kubelet[1437]: E1002 19:58:57.301574 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:58.301879 kubelet[1437]: E1002 19:58:58.301809 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:58.504519 kubelet[1437]: E1002 19:58:58.504483 1437 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:59.302996 kubelet[1437]: E1002 19:58:59.302928 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:59.357652 kubelet[1437]: E1002 19:58:59.357604 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:58:59.357856 kubelet[1437]: E1002 19:58:59.357834 1437 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-58xg8_kube-system(6c7a3bb7-28ad-402c-9d68-8da6312464cd)\"" pod="kube-system/cilium-58xg8" podUID=6c7a3bb7-28ad-402c-9d68-8da6312464cd Oct 2 19:59:00.303888 kubelet[1437]: E1002 19:59:00.303822 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:01.304688 kubelet[1437]: E1002 19:59:01.304621 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:02.305633 kubelet[1437]: E1002 19:59:02.305561 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:03.306482 kubelet[1437]: E1002 19:59:03.306410 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:03.505778 kubelet[1437]: E1002 19:59:03.505750 1437 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:04.307586 kubelet[1437]: E1002 19:59:04.307508 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:05.307958 kubelet[1437]: E1002 19:59:05.307916 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:06.308430 kubelet[1437]: E1002 19:59:06.308386 1437 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:07.309500 kubelet[1437]: E1002 19:59:07.309448 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:08.310033 kubelet[1437]: E1002 19:59:08.309977 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:08.506873 kubelet[1437]: E1002 19:59:08.506836 1437 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:09.310641 kubelet[1437]: E1002 19:59:09.310605 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:10.312099 kubelet[1437]: E1002 19:59:10.312004 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:11.312272 kubelet[1437]: E1002 19:59:11.312218 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:11.357640 kubelet[1437]: E1002 19:59:11.357613 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:11.358163 kubelet[1437]: E1002 19:59:11.358146 1437 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-58xg8_kube-system(6c7a3bb7-28ad-402c-9d68-8da6312464cd)\"" pod="kube-system/cilium-58xg8" podUID=6c7a3bb7-28ad-402c-9d68-8da6312464cd Oct 2 19:59:12.312448 kubelet[1437]: E1002 19:59:12.312394 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:13.169413 kubelet[1437]: E1002 19:59:13.169379 1437 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:13.313330 kubelet[1437]: E1002 19:59:13.313282 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:13.507589 kubelet[1437]: E1002 19:59:13.507565 1437 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:14.314005 kubelet[1437]: E1002 19:59:14.313958 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:15.314688 kubelet[1437]: E1002 19:59:15.314651 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:16.315600 kubelet[1437]: E1002 19:59:16.315565 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:17.316806 kubelet[1437]: E1002 19:59:17.316731 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:18.317936 kubelet[1437]: E1002 19:59:18.317867 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:18.508808 
kubelet[1437]: E1002 19:59:18.508775 1437 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:19.318409 kubelet[1437]: E1002 19:59:19.318364 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:20.318824 kubelet[1437]: E1002 19:59:20.318763 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:21.122120 env[1137]: time="2023-10-02T19:59:21.122071175Z" level=info msg="StopPodSandbox for \"4eddbe9b906b9b1015cb9c3228fc473c60fc983c98e9db58e2c83a9d04c20417\"" Oct 2 19:59:21.122468 env[1137]: time="2023-10-02T19:59:21.122137936Z" level=info msg="Container to stop \"8ca2fed731b91346f521334bf3acfed3f2a7557e107d4943f9b812e577d3e0e6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:59:21.124639 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4eddbe9b906b9b1015cb9c3228fc473c60fc983c98e9db58e2c83a9d04c20417-shm.mount: Deactivated successfully. Oct 2 19:59:21.130653 systemd[1]: cri-containerd-4eddbe9b906b9b1015cb9c3228fc473c60fc983c98e9db58e2c83a9d04c20417.scope: Deactivated successfully. Oct 2 19:59:21.129000 audit: BPF prog-id=67 op=UNLOAD Oct 2 19:59:21.131438 kernel: kauditd_printk_skb: 186 callbacks suppressed Oct 2 19:59:21.131525 kernel: audit: type=1334 audit(1696276761.129:651): prog-id=67 op=UNLOAD Oct 2 19:59:21.134000 audit: BPF prog-id=71 op=UNLOAD Oct 2 19:59:21.136032 kernel: audit: type=1334 audit(1696276761.134:652): prog-id=71 op=UNLOAD Oct 2 19:59:21.149962 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4eddbe9b906b9b1015cb9c3228fc473c60fc983c98e9db58e2c83a9d04c20417-rootfs.mount: Deactivated successfully. 
Oct 2 19:59:21.154734 env[1137]: time="2023-10-02T19:59:21.154682363Z" level=info msg="shim disconnected" id=4eddbe9b906b9b1015cb9c3228fc473c60fc983c98e9db58e2c83a9d04c20417 Oct 2 19:59:21.155471 env[1137]: time="2023-10-02T19:59:21.155437898Z" level=warning msg="cleaning up after shim disconnected" id=4eddbe9b906b9b1015cb9c3228fc473c60fc983c98e9db58e2c83a9d04c20417 namespace=k8s.io Oct 2 19:59:21.155580 env[1137]: time="2023-10-02T19:59:21.155565181Z" level=info msg="cleaning up dead shim" Oct 2 19:59:21.164297 env[1137]: time="2023-10-02T19:59:21.164252479Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:59:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1971 runtime=io.containerd.runc.v2\n" Oct 2 19:59:21.164764 env[1137]: time="2023-10-02T19:59:21.164732289Z" level=info msg="TearDown network for sandbox \"4eddbe9b906b9b1015cb9c3228fc473c60fc983c98e9db58e2c83a9d04c20417\" successfully" Oct 2 19:59:21.165311 env[1137]: time="2023-10-02T19:59:21.164853491Z" level=info msg="StopPodSandbox for \"4eddbe9b906b9b1015cb9c3228fc473c60fc983c98e9db58e2c83a9d04c20417\" returns successfully" Oct 2 19:59:21.280691 kubelet[1437]: I1002 19:59:21.280648 1437 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-cni-path\") pod \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\" (UID: \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\") " Oct 2 19:59:21.280877 kubelet[1437]: I1002 19:59:21.280708 1437 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c7a3bb7-28ad-402c-9d68-8da6312464cd-cilium-config-path\") pod \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\" (UID: \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\") " Oct 2 19:59:21.280877 kubelet[1437]: I1002 19:59:21.280734 1437 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-xtables-lock\") pod \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\" (UID: \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\") " Oct 2 19:59:21.280877 kubelet[1437]: I1002 19:59:21.280759 1437 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6c7a3bb7-28ad-402c-9d68-8da6312464cd-hubble-tls\") pod \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\" (UID: \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\") " Oct 2 19:59:21.280877 kubelet[1437]: I1002 19:59:21.280782 1437 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-hostproc\") pod \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\" (UID: \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\") " Oct 2 19:59:21.280877 kubelet[1437]: I1002 19:59:21.280778 1437 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-cni-path" (OuterVolumeSpecName: "cni-path") pod "6c7a3bb7-28ad-402c-9d68-8da6312464cd" (UID: "6c7a3bb7-28ad-402c-9d68-8da6312464cd"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:21.280877 kubelet[1437]: I1002 19:59:21.280802 1437 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-cilium-run\") pod \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\" (UID: \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\") " Oct 2 19:59:21.280877 kubelet[1437]: I1002 19:59:21.280821 1437 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-bpf-maps\") pod \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\" (UID: \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\") " Oct 2 19:59:21.280877 kubelet[1437]: I1002 19:59:21.280838 1437 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-etc-cni-netd\") pod \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\" (UID: \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\") " Oct 2 19:59:21.280877 kubelet[1437]: I1002 19:59:21.280856 1437 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-cilium-cgroup\") pod \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\" (UID: \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\") " Oct 2 19:59:21.281127 kubelet[1437]: I1002 19:59:21.280904 1437 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6c7a3bb7-28ad-402c-9d68-8da6312464cd-clustermesh-secrets\") pod \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\" (UID: \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\") " Oct 2 19:59:21.281127 kubelet[1437]: I1002 19:59:21.280922 1437 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-host-proc-sys-kernel\") pod \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\" (UID: \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\") " Oct 2 19:59:21.281127 kubelet[1437]: I1002 19:59:21.280939 1437 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-host-proc-sys-net\") pod \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\" (UID: \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\") " Oct 2 19:59:21.281127 kubelet[1437]: I1002 19:59:21.280960 1437 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrnf6\" (UniqueName: \"kubernetes.io/projected/6c7a3bb7-28ad-402c-9d68-8da6312464cd-kube-api-access-hrnf6\") pod \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\" (UID: \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\") " Oct 2 19:59:21.281127 kubelet[1437]: I1002 19:59:21.280977 1437 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-lib-modules\") pod \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\" (UID: \"6c7a3bb7-28ad-402c-9d68-8da6312464cd\") " Oct 2 19:59:21.281127 kubelet[1437]: I1002 19:59:21.280997 1437 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-cni-path\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:59:21.281127 kubelet[1437]: I1002 19:59:21.281038 1437 
operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6c7a3bb7-28ad-402c-9d68-8da6312464cd" (UID: "6c7a3bb7-28ad-402c-9d68-8da6312464cd"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:21.281127 kubelet[1437]: I1002 19:59:21.281061 1437 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-hostproc" (OuterVolumeSpecName: "hostproc") pod "6c7a3bb7-28ad-402c-9d68-8da6312464cd" (UID: "6c7a3bb7-28ad-402c-9d68-8da6312464cd"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:21.281127 kubelet[1437]: I1002 19:59:21.281079 1437 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6c7a3bb7-28ad-402c-9d68-8da6312464cd" (UID: "6c7a3bb7-28ad-402c-9d68-8da6312464cd"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:21.281127 kubelet[1437]: I1002 19:59:21.281093 1437 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6c7a3bb7-28ad-402c-9d68-8da6312464cd" (UID: "6c7a3bb7-28ad-402c-9d68-8da6312464cd"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:21.281127 kubelet[1437]: I1002 19:59:21.281107 1437 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6c7a3bb7-28ad-402c-9d68-8da6312464cd" (UID: "6c7a3bb7-28ad-402c-9d68-8da6312464cd"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:21.281127 kubelet[1437]: I1002 19:59:21.281121 1437 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6c7a3bb7-28ad-402c-9d68-8da6312464cd" (UID: "6c7a3bb7-28ad-402c-9d68-8da6312464cd"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:21.281400 kubelet[1437]: I1002 19:59:21.281320 1437 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6c7a3bb7-28ad-402c-9d68-8da6312464cd" (UID: "6c7a3bb7-28ad-402c-9d68-8da6312464cd"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:21.281400 kubelet[1437]: I1002 19:59:21.281341 1437 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6c7a3bb7-28ad-402c-9d68-8da6312464cd" (UID: "6c7a3bb7-28ad-402c-9d68-8da6312464cd"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:21.282074 kubelet[1437]: I1002 19:59:21.281492 1437 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6c7a3bb7-28ad-402c-9d68-8da6312464cd" (UID: "6c7a3bb7-28ad-402c-9d68-8da6312464cd"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:21.282074 kubelet[1437]: W1002 19:59:21.281633 1437 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/6c7a3bb7-28ad-402c-9d68-8da6312464cd/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:59:21.283690 kubelet[1437]: I1002 19:59:21.283659 1437 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c7a3bb7-28ad-402c-9d68-8da6312464cd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6c7a3bb7-28ad-402c-9d68-8da6312464cd" (UID: "6c7a3bb7-28ad-402c-9d68-8da6312464cd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:59:21.284777 systemd[1]: var-lib-kubelet-pods-6c7a3bb7\x2d28ad\x2d402c\x2d9d68\x2d8da6312464cd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhrnf6.mount: Deactivated successfully. Oct 2 19:59:21.284864 systemd[1]: var-lib-kubelet-pods-6c7a3bb7\x2d28ad\x2d402c\x2d9d68\x2d8da6312464cd-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:59:21.284918 systemd[1]: var-lib-kubelet-pods-6c7a3bb7\x2d28ad\x2d402c\x2d9d68\x2d8da6312464cd-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:59:21.286620 kubelet[1437]: I1002 19:59:21.286564 1437 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c7a3bb7-28ad-402c-9d68-8da6312464cd-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6c7a3bb7-28ad-402c-9d68-8da6312464cd" (UID: "6c7a3bb7-28ad-402c-9d68-8da6312464cd"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:59:21.287026 kubelet[1437]: I1002 19:59:21.286988 1437 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c7a3bb7-28ad-402c-9d68-8da6312464cd-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6c7a3bb7-28ad-402c-9d68-8da6312464cd" (UID: "6c7a3bb7-28ad-402c-9d68-8da6312464cd"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:59:21.287069 kubelet[1437]: I1002 19:59:21.287045 1437 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c7a3bb7-28ad-402c-9d68-8da6312464cd-kube-api-access-hrnf6" (OuterVolumeSpecName: "kube-api-access-hrnf6") pod "6c7a3bb7-28ad-402c-9d68-8da6312464cd" (UID: "6c7a3bb7-28ad-402c-9d68-8da6312464cd"). InnerVolumeSpecName "kube-api-access-hrnf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:59:21.319105 kubelet[1437]: E1002 19:59:21.319061 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:21.362426 systemd[1]: Removed slice kubepods-burstable-pod6c7a3bb7_28ad_402c_9d68_8da6312464cd.slice. 
Oct 2 19:59:21.382107 kubelet[1437]: I1002 19:59:21.382018 1437 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c7a3bb7-28ad-402c-9d68-8da6312464cd-cilium-config-path\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:59:21.382107 kubelet[1437]: I1002 19:59:21.382047 1437 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-cilium-run\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:59:21.382107 kubelet[1437]: I1002 19:59:21.382058 1437 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-bpf-maps\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:59:21.382107 kubelet[1437]: I1002 19:59:21.382067 1437 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-xtables-lock\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:59:21.382107 kubelet[1437]: I1002 19:59:21.382076 1437 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6c7a3bb7-28ad-402c-9d68-8da6312464cd-hubble-tls\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:59:21.382107 kubelet[1437]: I1002 19:59:21.382085 1437 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-hostproc\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:59:21.382107 kubelet[1437]: I1002 19:59:21.382094 1437 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-cilium-cgroup\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:59:21.382107 kubelet[1437]: I1002 19:59:21.382103 1437 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6c7a3bb7-28ad-402c-9d68-8da6312464cd-clustermesh-secrets\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:59:21.382107 kubelet[1437]: I1002 19:59:21.382113 1437 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-etc-cni-netd\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:59:21.382376 kubelet[1437]: I1002 19:59:21.382134 1437 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-host-proc-sys-net\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:59:21.382376 kubelet[1437]: I1002 19:59:21.382146 1437 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hrnf6\" (UniqueName: \"kubernetes.io/projected/6c7a3bb7-28ad-402c-9d68-8da6312464cd-kube-api-access-hrnf6\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:59:21.382376 kubelet[1437]: I1002 19:59:21.382155 1437 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-lib-modules\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:59:21.382376 kubelet[1437]: I1002 19:59:21.382164 1437 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6c7a3bb7-28ad-402c-9d68-8da6312464cd-host-proc-sys-kernel\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:59:21.683969 kubelet[1437]: I1002 
19:59:21.683865 1437 scope.go:115] "RemoveContainer" containerID="8ca2fed731b91346f521334bf3acfed3f2a7557e107d4943f9b812e577d3e0e6" Oct 2 19:59:21.685439 env[1137]: time="2023-10-02T19:59:21.685396354Z" level=info msg="RemoveContainer for \"8ca2fed731b91346f521334bf3acfed3f2a7557e107d4943f9b812e577d3e0e6\"" Oct 2 19:59:21.687802 env[1137]: time="2023-10-02T19:59:21.687749442Z" level=info msg="RemoveContainer for \"8ca2fed731b91346f521334bf3acfed3f2a7557e107d4943f9b812e577d3e0e6\" returns successfully" Oct 2 19:59:22.319809 kubelet[1437]: E1002 19:59:22.319756 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:23.320756 kubelet[1437]: E1002 19:59:23.320702 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:23.359283 kubelet[1437]: I1002 19:59:23.359250 1437 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=6c7a3bb7-28ad-402c-9d68-8da6312464cd path="/var/lib/kubelet/pods/6c7a3bb7-28ad-402c-9d68-8da6312464cd/volumes" Oct 2 19:59:23.509944 kubelet[1437]: E1002 19:59:23.509917 1437 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:24.321730 kubelet[1437]: E1002 19:59:24.321684 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:24.759197 kubelet[1437]: I1002 19:59:24.759152 1437 topology_manager.go:212] "Topology Admit Handler" Oct 2 19:59:24.759197 kubelet[1437]: E1002 19:59:24.759200 1437 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6c7a3bb7-28ad-402c-9d68-8da6312464cd" containerName="mount-cgroup" Oct 2 19:59:24.759197 kubelet[1437]: E1002 19:59:24.759210 1437 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6c7a3bb7-28ad-402c-9d68-8da6312464cd" containerName="mount-cgroup" Oct 2 19:59:24.759399 kubelet[1437]: E1002 19:59:24.759217 1437 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6c7a3bb7-28ad-402c-9d68-8da6312464cd" containerName="mount-cgroup" Oct 2 19:59:24.759399 kubelet[1437]: I1002 19:59:24.759232 1437 memory_manager.go:346] "RemoveStaleState removing state" podUID="6c7a3bb7-28ad-402c-9d68-8da6312464cd" containerName="mount-cgroup" Oct 2 19:59:24.759399 kubelet[1437]: I1002 19:59:24.759238 1437 memory_manager.go:346] "RemoveStaleState removing state" podUID="6c7a3bb7-28ad-402c-9d68-8da6312464cd" containerName="mount-cgroup" Oct 2 19:59:24.759399 kubelet[1437]: I1002 19:59:24.759244 1437 memory_manager.go:346] "RemoveStaleState removing state" podUID="6c7a3bb7-28ad-402c-9d68-8da6312464cd" containerName="mount-cgroup" Oct 2 19:59:24.759399 kubelet[1437]: I1002 19:59:24.759250 1437 memory_manager.go:346] "RemoveStaleState removing state" podUID="6c7a3bb7-28ad-402c-9d68-8da6312464cd" containerName="mount-cgroup" Oct 2 19:59:24.763998 systemd[1]: Created slice kubepods-besteffort-podd0e126f3_9a6a_4fcd_8aa6_f28da02596d9.slice. 
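After the old mount-cgroup container is removed, kubelet admits a replacement cilium-operator pod and creates a besteffort slice for it; the repeated RemoveStaleState entries only purge CPU- and memory-manager state left behind by the deleted pod UID. A hedged way to look at the resulting slices from the node, assuming the standard systemd tools shipped with the OS:

    # Assumption: run as root; systemd-cgls available (standard on systemd-based images).
    # Show the cgroup subtree created for the new operator pod.
    systemd-cgls --no-pager --unit kubepods-besteffort-podd0e126f3_9a6a_4fcd_8aa6_f28da02596d9.slice

    # List all kubepods slices currently known to systemd.
    systemctl list-units --type=slice --no-pager 'kubepods*'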
Oct 2 19:59:24.779008 kubelet[1437]: I1002 19:59:24.778972 1437 topology_manager.go:212] "Topology Admit Handler" Oct 2 19:59:24.779259 kubelet[1437]: E1002 19:59:24.779244 1437 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6c7a3bb7-28ad-402c-9d68-8da6312464cd" containerName="mount-cgroup" Oct 2 19:59:24.779341 kubelet[1437]: I1002 19:59:24.779330 1437 memory_manager.go:346] "RemoveStaleState removing state" podUID="6c7a3bb7-28ad-402c-9d68-8da6312464cd" containerName="mount-cgroup" Oct 2 19:59:24.779412 kubelet[1437]: E1002 19:59:24.779402 1437 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6c7a3bb7-28ad-402c-9d68-8da6312464cd" containerName="mount-cgroup" Oct 2 19:59:24.783675 systemd[1]: Created slice kubepods-burstable-pod3c2247a5_4919_4545_aa9f_e803da752887.slice. Oct 2 19:59:24.800313 kubelet[1437]: I1002 19:59:24.800273 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-xtables-lock\") pod \"cilium-pxz7t\" (UID: \"3c2247a5-4919-4545-aa9f-e803da752887\") " pod="kube-system/cilium-pxz7t" Oct 2 19:59:24.800313 kubelet[1437]: I1002 19:59:24.800318 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3c2247a5-4919-4545-aa9f-e803da752887-clustermesh-secrets\") pod \"cilium-pxz7t\" (UID: \"3c2247a5-4919-4545-aa9f-e803da752887\") " pod="kube-system/cilium-pxz7t" Oct 2 19:59:24.800482 kubelet[1437]: I1002 19:59:24.800348 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c2247a5-4919-4545-aa9f-e803da752887-cilium-config-path\") pod \"cilium-pxz7t\" (UID: \"3c2247a5-4919-4545-aa9f-e803da752887\") " pod="kube-system/cilium-pxz7t" Oct 2 19:59:24.800482 kubelet[1437]: I1002 19:59:24.800369 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3c2247a5-4919-4545-aa9f-e803da752887-cilium-ipsec-secrets\") pod \"cilium-pxz7t\" (UID: \"3c2247a5-4919-4545-aa9f-e803da752887\") " pod="kube-system/cilium-pxz7t" Oct 2 19:59:24.800482 kubelet[1437]: I1002 19:59:24.800389 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-host-proc-sys-net\") pod \"cilium-pxz7t\" (UID: \"3c2247a5-4919-4545-aa9f-e803da752887\") " pod="kube-system/cilium-pxz7t" Oct 2 19:59:24.800482 kubelet[1437]: I1002 19:59:24.800414 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-host-proc-sys-kernel\") pod \"cilium-pxz7t\" (UID: \"3c2247a5-4919-4545-aa9f-e803da752887\") " pod="kube-system/cilium-pxz7t" Oct 2 19:59:24.800482 kubelet[1437]: I1002 19:59:24.800434 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26kfc\" (UniqueName: \"kubernetes.io/projected/3c2247a5-4919-4545-aa9f-e803da752887-kube-api-access-26kfc\") pod \"cilium-pxz7t\" (UID: \"3c2247a5-4919-4545-aa9f-e803da752887\") " pod="kube-system/cilium-pxz7t" Oct 2 19:59:24.800482 kubelet[1437]: I1002 19:59:24.800454 1437 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-hostproc\") pod \"cilium-pxz7t\" (UID: \"3c2247a5-4919-4545-aa9f-e803da752887\") " pod="kube-system/cilium-pxz7t" Oct 2 19:59:24.800482 kubelet[1437]: I1002 19:59:24.800475 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-cilium-run\") pod \"cilium-pxz7t\" (UID: \"3c2247a5-4919-4545-aa9f-e803da752887\") " pod="kube-system/cilium-pxz7t" Oct 2 19:59:24.800659 kubelet[1437]: I1002 19:59:24.800503 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-bpf-maps\") pod \"cilium-pxz7t\" (UID: \"3c2247a5-4919-4545-aa9f-e803da752887\") " pod="kube-system/cilium-pxz7t" Oct 2 19:59:24.800659 kubelet[1437]: I1002 19:59:24.800521 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-etc-cni-netd\") pod \"cilium-pxz7t\" (UID: \"3c2247a5-4919-4545-aa9f-e803da752887\") " pod="kube-system/cilium-pxz7t" Oct 2 19:59:24.800659 kubelet[1437]: I1002 19:59:24.800548 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d0e126f3-9a6a-4fcd-8aa6-f28da02596d9-cilium-config-path\") pod \"cilium-operator-574c4bb98d-ghsgt\" (UID: \"d0e126f3-9a6a-4fcd-8aa6-f28da02596d9\") " pod="kube-system/cilium-operator-574c4bb98d-ghsgt" Oct 2 19:59:24.800659 kubelet[1437]: I1002 19:59:24.800578 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wttns\" (UniqueName: \"kubernetes.io/projected/d0e126f3-9a6a-4fcd-8aa6-f28da02596d9-kube-api-access-wttns\") pod \"cilium-operator-574c4bb98d-ghsgt\" (UID: \"d0e126f3-9a6a-4fcd-8aa6-f28da02596d9\") " pod="kube-system/cilium-operator-574c4bb98d-ghsgt" Oct 2 19:59:24.800659 kubelet[1437]: I1002 19:59:24.800599 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-cni-path\") pod \"cilium-pxz7t\" (UID: \"3c2247a5-4919-4545-aa9f-e803da752887\") " pod="kube-system/cilium-pxz7t" Oct 2 19:59:24.800659 kubelet[1437]: I1002 19:59:24.800616 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-lib-modules\") pod \"cilium-pxz7t\" (UID: \"3c2247a5-4919-4545-aa9f-e803da752887\") " pod="kube-system/cilium-pxz7t" Oct 2 19:59:24.800659 kubelet[1437]: I1002 19:59:24.800645 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3c2247a5-4919-4545-aa9f-e803da752887-hubble-tls\") pod \"cilium-pxz7t\" (UID: \"3c2247a5-4919-4545-aa9f-e803da752887\") " pod="kube-system/cilium-pxz7t" Oct 2 19:59:24.800838 kubelet[1437]: I1002 19:59:24.800667 1437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-cilium-cgroup\") pod \"cilium-pxz7t\" (UID: \"3c2247a5-4919-4545-aa9f-e803da752887\") " pod="kube-system/cilium-pxz7t" Oct 2 19:59:25.067474 kubelet[1437]: E1002 19:59:25.066631 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:25.067759 env[1137]: time="2023-10-02T19:59:25.067106592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-ghsgt,Uid:d0e126f3-9a6a-4fcd-8aa6-f28da02596d9,Namespace:kube-system,Attempt:0,}" Oct 2 19:59:25.081245 env[1137]: time="2023-10-02T19:59:25.081159842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:59:25.081245 env[1137]: time="2023-10-02T19:59:25.081209483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:59:25.081245 env[1137]: time="2023-10-02T19:59:25.081220244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:59:25.081655 env[1137]: time="2023-10-02T19:59:25.081605612Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/38ba1c556a22ef35a6864f5612c0438f66b9447d6c19123ac58838f2fc1467f9 pid=2000 runtime=io.containerd.runc.v2 Oct 2 19:59:25.092615 systemd[1]: Started cri-containerd-38ba1c556a22ef35a6864f5612c0438f66b9447d6c19123ac58838f2fc1467f9.scope. Oct 2 19:59:25.098309 kubelet[1437]: E1002 19:59:25.097802 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:25.098734 env[1137]: time="2023-10-02T19:59:25.098700005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pxz7t,Uid:3c2247a5-4919-4545-aa9f-e803da752887,Namespace:kube-system,Attempt:0,}" Oct 2 19:59:25.117764 env[1137]: time="2023-10-02T19:59:25.116998463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:59:25.117764 env[1137]: time="2023-10-02T19:59:25.117046504Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:59:25.117764 env[1137]: time="2023-10-02T19:59:25.117057704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:59:25.117764 env[1137]: time="2023-10-02T19:59:25.117174067Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3bbf1203711a1edcae1be731ec71d6e62261d0a9d6f9b4af8afa0f86634737bf pid=2029 runtime=io.containerd.runc.v2 Oct 2 19:59:25.129471 systemd[1]: Started cri-containerd-3bbf1203711a1edcae1be731ec71d6e62261d0a9d6f9b4af8afa0f86634737bf.scope. 
Oct 2 19:59:25.153000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.153000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.158741 kernel: audit: type=1400 audit(1696276765.153:653): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.158807 kernel: audit: type=1400 audit(1696276765.153:654): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.158826 kernel: audit: type=1400 audit(1696276765.153:655): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.153000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.153000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.163038 kernel: audit: type=1400 audit(1696276765.153:656): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.163108 kernel: audit: type=1400 audit(1696276765.153:657): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.153000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.153000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.166484 kernel: audit: type=1400 audit(1696276765.153:658): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.168332 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:59:25.168382 kernel: audit: audit_lost=1 audit_rate_limit=0 audit_backlog_limit=64 Oct 2 19:59:25.153000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.153000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.153000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.153000 audit[1]: AVC avc: denied 
{ bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.153000 audit: BPF prog-id=78 op=LOAD Oct 2 19:59:25.154000 audit[2043]: AVC avc: denied { bpf } for pid=2043 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.154000 audit[2043]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=40001bdb38 a2=10 a3=0 items=0 ppid=2029 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:25.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362626631323033373131613165646361653162653733316563373164 Oct 2 19:59:25.154000 audit[2043]: AVC avc: denied { perfmon } for pid=2043 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.154000 audit[2043]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001bd5a0 a2=3c a3=0 items=0 ppid=2029 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:25.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362626631323033373131613165646361653162653733316563373164 Oct 2 19:59:25.154000 audit[2043]: AVC avc: denied { bpf } for pid=2043 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.154000 audit[2043]: AVC avc: denied { bpf } for pid=2043 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.154000 audit[2043]: AVC avc: denied { bpf } for pid=2043 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.154000 audit[2043]: AVC avc: denied { perfmon } for pid=2043 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.154000 audit[2043]: AVC avc: denied { perfmon } for pid=2043 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.154000 audit[2043]: AVC avc: denied { perfmon } for pid=2043 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.154000 audit[2043]: AVC avc: denied { perfmon } for pid=2043 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.154000 audit[2043]: AVC avc: denied { perfmon } for pid=2043 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.154000 audit[2043]: AVC avc: denied { bpf } for pid=2043 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.154000 audit[2043]: AVC avc: denied { bpf } for pid=2043 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.154000 audit: BPF prog-id=79 op=LOAD Oct 2 19:59:25.154000 audit[2043]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001bd8e0 a2=78 a3=0 items=0 ppid=2029 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:25.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362626631323033373131613165646361653162653733316563373164 Oct 2 19:59:25.156000 audit[2043]: AVC avc: denied { bpf } for pid=2043 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.156000 audit[2043]: AVC avc: denied { bpf } for pid=2043 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.156000 audit[2043]: AVC avc: denied { perfmon } for pid=2043 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.156000 audit[2043]: AVC avc: denied { perfmon } for pid=2043 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.156000 audit[2043]: AVC avc: denied { perfmon } for pid=2043 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.156000 audit[2043]: AVC avc: denied { perfmon } for pid=2043 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.156000 audit[2043]: AVC avc: denied { perfmon } for pid=2043 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.156000 audit[2043]: AVC avc: denied { bpf } for pid=2043 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.156000 audit[2043]: AVC avc: denied { bpf } for pid=2043 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.156000 audit: BPF prog-id=80 op=LOAD Oct 2 19:59:25.156000 audit[2043]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001bd670 a2=78 a3=0 items=0 ppid=2029 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:25.156000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362626631323033373131613165646361653162653733316563373164 Oct 2 19:59:25.158000 audit: BPF prog-id=80 op=UNLOAD Oct 2 19:59:25.158000 audit: BPF prog-id=79 op=UNLOAD Oct 2 19:59:25.158000 audit[2043]: AVC avc: denied { bpf } for pid=2043 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.158000 audit[2043]: AVC avc: denied { bpf } for pid=2043 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.158000 audit[2043]: AVC avc: denied { bpf } for pid=2043 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.158000 audit[2043]: AVC avc: denied { perfmon } for pid=2043 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.158000 audit[2043]: AVC avc: denied { perfmon } for pid=2043 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.158000 audit[2043]: AVC avc: denied { perfmon } for pid=2043 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.158000 audit[2043]: AVC avc: denied { perfmon } for pid=2043 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.158000 audit[2043]: AVC avc: denied { perfmon } for pid=2043 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.158000 audit[2043]: AVC avc: denied { bpf } for pid=2043 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.158000 audit[2043]: AVC avc: denied { bpf } for pid=2043 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.158000 audit: BPF prog-id=81 op=LOAD Oct 2 19:59:25.158000 audit[2043]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001bdb40 a2=78 a3=0 items=0 ppid=2029 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:25.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362626631323033373131613165646361653162653733316563373164 Oct 2 19:59:25.166000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.166000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:59:25.166000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.166000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.166000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.166000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.166000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.166000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.166000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.168000 audit[2010]: AVC avc: denied { bpf } for pid=2010 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.168000 audit[2010]: AVC avc: denied { bpf } for pid=2010 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.168000 audit[2010]: AVC avc: denied { bpf } for pid=2010 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.168000 audit[2010]: AVC avc: denied { perfmon } for pid=2010 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.168000 audit[2010]: AVC avc: denied { perfmon } for pid=2010 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.168000 audit[2010]: AVC avc: denied { perfmon } for pid=2010 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.168000 audit[2010]: AVC avc: denied { perfmon } for pid=2010 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.168000 audit[2010]: AVC avc: denied { perfmon } for pid=2010 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.168000 audit[2010]: AVC avc: denied { bpf } for pid=2010 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.168000 audit[2010]: AVC avc: denied { bpf } for pid=2010 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:59:25.168000 audit: BPF prog-id=83 op=LOAD Oct 2 19:59:25.168000 audit[2010]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=2000 pid=2010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:25.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338626131633535366132326566333561363836346635363132633034 Oct 2 19:59:25.168000 audit[2010]: AVC avc: denied { bpf } for pid=2010 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.168000 audit[2010]: AVC avc: denied { bpf } for pid=2010 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.168000 audit[2010]: AVC avc: denied { perfmon } for pid=2010 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.168000 audit[2010]: AVC avc: denied { perfmon } for pid=2010 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.168000 audit[2010]: AVC avc: denied { perfmon } for pid=2010 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.168000 audit[2010]: AVC avc: denied { perfmon } for pid=2010 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.168000 audit[2010]: AVC avc: denied { perfmon } for pid=2010 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.168000 audit[2010]: AVC avc: denied { bpf } for pid=2010 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.168000 audit[2010]: AVC avc: denied { bpf } for pid=2010 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.168000 audit: BPF prog-id=84 op=LOAD Oct 2 19:59:25.168000 audit[2010]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=2000 pid=2010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:25.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338626131633535366132326566333561363836346635363132633034 Oct 2 19:59:25.168000 audit: BPF prog-id=84 op=UNLOAD Oct 2 19:59:25.168000 audit: BPF prog-id=83 op=UNLOAD Oct 2 19:59:25.168000 audit[2010]: AVC avc: denied { bpf } for pid=2010 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.168000 audit[2010]: AVC avc: denied { bpf } for pid=2010 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.168000 audit[2010]: AVC avc: denied { bpf } for pid=2010 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.168000 audit[2010]: AVC avc: denied { perfmon } for pid=2010 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.168000 audit[2010]: AVC avc: denied { perfmon } for pid=2010 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.168000 audit[2010]: AVC avc: denied { perfmon } for pid=2010 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.168000 audit[2010]: AVC avc: denied { perfmon } for pid=2010 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.168000 audit[2010]: AVC avc: denied { perfmon } for pid=2010 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.168000 audit[2010]: AVC avc: denied { bpf } for pid=2010 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.168000 audit[2010]: AVC avc: denied { bpf } for pid=2010 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:25.168000 audit: BPF prog-id=85 op=LOAD Oct 2 19:59:25.168000 audit[2010]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=2000 pid=2010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:25.168000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338626131633535366132326566333561363836346635363132633034 Oct 2 19:59:25.184128 env[1137]: time="2023-10-02T19:59:25.183741242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pxz7t,Uid:3c2247a5-4919-4545-aa9f-e803da752887,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bbf1203711a1edcae1be731ec71d6e62261d0a9d6f9b4af8afa0f86634737bf\"" Oct 2 19:59:25.185300 kubelet[1437]: E1002 19:59:25.185278 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:25.187266 env[1137]: time="2023-10-02T19:59:25.187204634Z" level=info msg="CreateContainer within sandbox \"3bbf1203711a1edcae1be731ec71d6e62261d0a9d6f9b4af8afa0f86634737bf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:59:25.197468 env[1137]: 
time="2023-10-02T19:59:25.197426445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-ghsgt,Uid:d0e126f3-9a6a-4fcd-8aa6-f28da02596d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"38ba1c556a22ef35a6864f5612c0438f66b9447d6c19123ac58838f2fc1467f9\"" Oct 2 19:59:25.198785 kubelet[1437]: E1002 19:59:25.198083 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:25.198881 env[1137]: time="2023-10-02T19:59:25.198180341Z" level=info msg="CreateContainer within sandbox \"3bbf1203711a1edcae1be731ec71d6e62261d0a9d6f9b4af8afa0f86634737bf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bc2b439a24fceb8e6f3d98c2b269ed5f0c3fbc6fc7abf120febab48e11701057\"" Oct 2 19:59:25.199155 env[1137]: time="2023-10-02T19:59:25.199030438Z" level=info msg="StartContainer for \"bc2b439a24fceb8e6f3d98c2b269ed5f0c3fbc6fc7abf120febab48e11701057\"" Oct 2 19:59:25.199393 env[1137]: time="2023-10-02T19:59:25.199361845Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 2 19:59:25.216191 systemd[1]: Started cri-containerd-bc2b439a24fceb8e6f3d98c2b269ed5f0c3fbc6fc7abf120febab48e11701057.scope. Oct 2 19:59:25.240263 systemd[1]: cri-containerd-bc2b439a24fceb8e6f3d98c2b269ed5f0c3fbc6fc7abf120febab48e11701057.scope: Deactivated successfully. Oct 2 19:59:25.261309 env[1137]: time="2023-10-02T19:59:25.261099081Z" level=info msg="shim disconnected" id=bc2b439a24fceb8e6f3d98c2b269ed5f0c3fbc6fc7abf120febab48e11701057 Oct 2 19:59:25.261309 env[1137]: time="2023-10-02T19:59:25.261151162Z" level=warning msg="cleaning up after shim disconnected" id=bc2b439a24fceb8e6f3d98c2b269ed5f0c3fbc6fc7abf120febab48e11701057 namespace=k8s.io Oct 2 19:59:25.261309 env[1137]: time="2023-10-02T19:59:25.261160722Z" level=info msg="cleaning up dead shim" Oct 2 19:59:25.270730 env[1137]: time="2023-10-02T19:59:25.270676159Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:59:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2099 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:59:25Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/bc2b439a24fceb8e6f3d98c2b269ed5f0c3fbc6fc7abf120febab48e11701057/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:59:25.270975 env[1137]: time="2023-10-02T19:59:25.270916604Z" level=error msg="copy shim log" error="read /proc/self/fd/35: file already closed" Oct 2 19:59:25.271146 env[1137]: time="2023-10-02T19:59:25.271106888Z" level=error msg="Failed to pipe stdout of container \"bc2b439a24fceb8e6f3d98c2b269ed5f0c3fbc6fc7abf120febab48e11701057\"" error="reading from a closed fifo" Oct 2 19:59:25.271248 env[1137]: time="2023-10-02T19:59:25.271156649Z" level=error msg="Failed to pipe stderr of container \"bc2b439a24fceb8e6f3d98c2b269ed5f0c3fbc6fc7abf120febab48e11701057\"" error="reading from a closed fifo" Oct 2 19:59:25.272798 env[1137]: time="2023-10-02T19:59:25.272680600Z" level=error msg="StartContainer for \"bc2b439a24fceb8e6f3d98c2b269ed5f0c3fbc6fc7abf120febab48e11701057\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid 
argument: unknown" Oct 2 19:59:25.273010 kubelet[1437]: E1002 19:59:25.272927 1437 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="bc2b439a24fceb8e6f3d98c2b269ed5f0c3fbc6fc7abf120febab48e11701057" Oct 2 19:59:25.273096 kubelet[1437]: E1002 19:59:25.273051 1437 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:59:25.273096 kubelet[1437]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:59:25.273096 kubelet[1437]: rm /hostbin/cilium-mount Oct 2 19:59:25.273096 kubelet[1437]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-26kfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-pxz7t_kube-system(3c2247a5-4919-4545-aa9f-e803da752887): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:59:25.273096 kubelet[1437]: E1002 19:59:25.273091 1437 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-pxz7t" podUID=3c2247a5-4919-4545-aa9f-e803da752887 Oct 2 19:59:25.321944 kubelet[1437]: E1002 19:59:25.321836 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:25.693036 kubelet[1437]: E1002 19:59:25.692468 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:25.703923 env[1137]: time="2023-10-02T19:59:25.703878871Z" level=info msg="CreateContainer within sandbox \"3bbf1203711a1edcae1be731ec71d6e62261d0a9d6f9b4af8afa0f86634737bf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:59:25.723428 env[1137]: time="2023-10-02T19:59:25.723365953Z" level=info msg="CreateContainer within sandbox \"3bbf1203711a1edcae1be731ec71d6e62261d0a9d6f9b4af8afa0f86634737bf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"43fcec25692e5904e34870ce9a4243a5f5de09444199f123b0458ef95dcbbe55\"" Oct 2 19:59:25.724247 env[1137]: time="2023-10-02T19:59:25.724202651Z" level=info msg="StartContainer for \"43fcec25692e5904e34870ce9a4243a5f5de09444199f123b0458ef95dcbbe55\"" Oct 2 19:59:25.748764 systemd[1]: Started cri-containerd-43fcec25692e5904e34870ce9a4243a5f5de09444199f123b0458ef95dcbbe55.scope. Oct 2 19:59:25.776440 systemd[1]: cri-containerd-43fcec25692e5904e34870ce9a4243a5f5de09444199f123b0458ef95dcbbe55.scope: Deactivated successfully. Oct 2 19:59:25.783337 env[1137]: time="2023-10-02T19:59:25.783287032Z" level=info msg="shim disconnected" id=43fcec25692e5904e34870ce9a4243a5f5de09444199f123b0458ef95dcbbe55 Oct 2 19:59:25.783337 env[1137]: time="2023-10-02T19:59:25.783336993Z" level=warning msg="cleaning up after shim disconnected" id=43fcec25692e5904e34870ce9a4243a5f5de09444199f123b0458ef95dcbbe55 namespace=k8s.io Oct 2 19:59:25.783584 env[1137]: time="2023-10-02T19:59:25.783347713Z" level=info msg="cleaning up dead shim" Oct 2 19:59:25.791420 env[1137]: time="2023-10-02T19:59:25.791359599Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:59:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2135 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:59:25Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/43fcec25692e5904e34870ce9a4243a5f5de09444199f123b0458ef95dcbbe55/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:59:25.791664 env[1137]: time="2023-10-02T19:59:25.791602164Z" level=error msg="copy shim log" error="read /proc/self/fd/40: file already closed" Oct 2 19:59:25.791843 env[1137]: time="2023-10-02T19:59:25.791796368Z" level=error msg="Failed to pipe stdout of container \"43fcec25692e5904e34870ce9a4243a5f5de09444199f123b0458ef95dcbbe55\"" error="reading from a closed fifo" Oct 2 19:59:25.791889 env[1137]: time="2023-10-02T19:59:25.791807048Z" level=error msg="Failed to pipe stderr of container \"43fcec25692e5904e34870ce9a4243a5f5de09444199f123b0458ef95dcbbe55\"" error="reading from a closed fifo" Oct 2 19:59:25.793264 env[1137]: time="2023-10-02T19:59:25.793224237Z" level=error msg="StartContainer for \"43fcec25692e5904e34870ce9a4243a5f5de09444199f123b0458ef95dcbbe55\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:59:25.793612 kubelet[1437]: E1002 19:59:25.793583 1437 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" containerID="43fcec25692e5904e34870ce9a4243a5f5de09444199f123b0458ef95dcbbe55" Oct 2 19:59:25.793726 kubelet[1437]: E1002 19:59:25.793705 1437 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:59:25.793726 kubelet[1437]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:59:25.793726 kubelet[1437]: rm /hostbin/cilium-mount Oct 2 19:59:25.793726 kubelet[1437]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-26kfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-pxz7t_kube-system(3c2247a5-4919-4545-aa9f-e803da752887): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:59:25.793869 kubelet[1437]: E1002 19:59:25.793742 1437 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-pxz7t" podUID=3c2247a5-4919-4545-aa9f-e803da752887 Oct 2 19:59:26.023162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4015853357.mount: Deactivated successfully. 
Oct 2 19:59:26.322891 kubelet[1437]: E1002 19:59:26.322784 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:26.496886 env[1137]: time="2023-10-02T19:59:26.496822598Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:59:26.498202 env[1137]: time="2023-10-02T19:59:26.498146385Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:59:26.500439 env[1137]: time="2023-10-02T19:59:26.500404672Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:59:26.501294 env[1137]: time="2023-10-02T19:59:26.501257169Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Oct 2 19:59:26.510949 env[1137]: time="2023-10-02T19:59:26.510907209Z" level=info msg="CreateContainer within sandbox \"38ba1c556a22ef35a6864f5612c0438f66b9447d6c19123ac58838f2fc1467f9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 19:59:26.520399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2221677971.mount: Deactivated successfully. Oct 2 19:59:26.523875 env[1137]: time="2023-10-02T19:59:26.523839717Z" level=info msg="CreateContainer within sandbox \"38ba1c556a22ef35a6864f5612c0438f66b9447d6c19123ac58838f2fc1467f9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c511a6a79a3a48fdf5d7984cc0fd52f7bd5b36dcd2bafdb255827a3e47eb5963\"" Oct 2 19:59:26.524559 env[1137]: time="2023-10-02T19:59:26.524530011Z" level=info msg="StartContainer for \"c511a6a79a3a48fdf5d7984cc0fd52f7bd5b36dcd2bafdb255827a3e47eb5963\"" Oct 2 19:59:26.547645 systemd[1]: Started cri-containerd-c511a6a79a3a48fdf5d7984cc0fd52f7bd5b36dcd2bafdb255827a3e47eb5963.scope. 
Oct 2 19:59:26.577000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.581448 kernel: kauditd_printk_skb: 110 callbacks suppressed Oct 2 19:59:26.581521 kernel: audit: type=1400 audit(1696276766.577:685): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.581547 kernel: audit: type=1400 audit(1696276766.577:686): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.577000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.577000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.585498 kernel: audit: type=1400 audit(1696276766.577:687): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.585558 kernel: audit: type=1400 audit(1696276766.577:688): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.577000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.577000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.588922 kernel: audit: type=1400 audit(1696276766.577:689): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.588965 kernel: audit: type=1400 audit(1696276766.577:690): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.577000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.577000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.592312 kernel: audit: type=1400 audit(1696276766.577:691): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.592367 kernel: audit: type=1400 audit(1696276766.577:692): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.577000 audit[1]: AVC avc: denied { perfmon } for 
pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.577000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.595626 kernel: audit: type=1400 audit(1696276766.577:693): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.595712 kernel: audit: type=1400 audit(1696276766.578:694): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.578000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.578000 audit: BPF prog-id=86 op=LOAD Oct 2 19:59:26.580000 audit[2156]: AVC avc: denied { bpf } for pid=2156 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.580000 audit[2156]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000145b38 a2=10 a3=0 items=0 ppid=2000 pid=2156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:26.580000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335313161366137396133613438666466356437393834636330666435 Oct 2 19:59:26.580000 audit[2156]: AVC avc: denied { perfmon } for pid=2156 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.580000 audit[2156]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001455a0 a2=3c a3=0 items=0 ppid=2000 pid=2156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:26.580000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335313161366137396133613438666466356437393834636330666435 Oct 2 19:59:26.580000 audit[2156]: AVC avc: denied { bpf } for pid=2156 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.580000 audit[2156]: AVC avc: denied { bpf } for pid=2156 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.580000 audit[2156]: AVC avc: denied { bpf } for pid=2156 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.580000 audit[2156]: AVC avc: denied { perfmon } for pid=2156 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.580000 audit[2156]: AVC avc: denied { perfmon } for pid=2156 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.580000 audit[2156]: AVC avc: denied { perfmon } for pid=2156 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.580000 audit[2156]: AVC avc: denied { perfmon } for pid=2156 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.580000 audit[2156]: AVC avc: denied { perfmon } for pid=2156 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.580000 audit[2156]: AVC avc: denied { bpf } for pid=2156 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.580000 audit[2156]: AVC avc: denied { bpf } for pid=2156 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.580000 audit: BPF prog-id=87 op=LOAD Oct 2 19:59:26.580000 audit[2156]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001458e0 a2=78 a3=0 items=0 ppid=2000 pid=2156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:26.580000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335313161366137396133613438666466356437393834636330666435 Oct 2 19:59:26.582000 audit[2156]: AVC avc: denied { bpf } for pid=2156 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.582000 audit[2156]: AVC avc: denied { bpf } for pid=2156 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.582000 audit[2156]: AVC avc: denied { perfmon } for pid=2156 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.582000 audit[2156]: AVC avc: denied { perfmon } for pid=2156 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.582000 audit[2156]: AVC avc: denied { perfmon } for pid=2156 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.582000 audit[2156]: AVC avc: denied { perfmon } for pid=2156 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.582000 audit[2156]: AVC avc: denied { perfmon } for pid=2156 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 
2 19:59:26.582000 audit[2156]: AVC avc: denied { bpf } for pid=2156 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.582000 audit[2156]: AVC avc: denied { bpf } for pid=2156 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.582000 audit: BPF prog-id=88 op=LOAD Oct 2 19:59:26.582000 audit[2156]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000145670 a2=78 a3=0 items=0 ppid=2000 pid=2156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:26.582000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335313161366137396133613438666466356437393834636330666435 Oct 2 19:59:26.584000 audit: BPF prog-id=88 op=UNLOAD Oct 2 19:59:26.584000 audit: BPF prog-id=87 op=UNLOAD Oct 2 19:59:26.584000 audit[2156]: AVC avc: denied { bpf } for pid=2156 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.584000 audit[2156]: AVC avc: denied { bpf } for pid=2156 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.584000 audit[2156]: AVC avc: denied { bpf } for pid=2156 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.584000 audit[2156]: AVC avc: denied { perfmon } for pid=2156 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.584000 audit[2156]: AVC avc: denied { perfmon } for pid=2156 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.584000 audit[2156]: AVC avc: denied { perfmon } for pid=2156 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.584000 audit[2156]: AVC avc: denied { perfmon } for pid=2156 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.584000 audit[2156]: AVC avc: denied { perfmon } for pid=2156 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.584000 audit[2156]: AVC avc: denied { bpf } for pid=2156 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.584000 audit[2156]: AVC avc: denied { bpf } for pid=2156 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:26.584000 audit: BPF prog-id=89 op=LOAD Oct 2 19:59:26.584000 audit[2156]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000145b40 a2=78 a3=0 items=0 ppid=2000 pid=2156 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:26.584000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335313161366137396133613438666466356437393834636330666435 Oct 2 19:59:26.616576 env[1137]: time="2023-10-02T19:59:26.616521236Z" level=info msg="StartContainer for \"c511a6a79a3a48fdf5d7984cc0fd52f7bd5b36dcd2bafdb255827a3e47eb5963\" returns successfully" Oct 2 19:59:26.665000 audit[2167]: AVC avc: denied { map_create } for pid=2167 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c301,c319 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c301,c319 tclass=bpf permissive=0 Oct 2 19:59:26.665000 audit[2167]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-13 a0=0 a1=40005a7768 a2=48 a3=0 items=0 ppid=2000 pid=2167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c301,c319 key=(null) Oct 2 19:59:26.665000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 19:59:26.698401 kubelet[1437]: I1002 19:59:26.698367 1437 scope.go:115] "RemoveContainer" containerID="bc2b439a24fceb8e6f3d98c2b269ed5f0c3fbc6fc7abf120febab48e11701057" Oct 2 19:59:26.699125 kubelet[1437]: I1002 19:59:26.698896 1437 scope.go:115] "RemoveContainer" containerID="bc2b439a24fceb8e6f3d98c2b269ed5f0c3fbc6fc7abf120febab48e11701057" Oct 2 19:59:26.699646 env[1137]: time="2023-10-02T19:59:26.699595116Z" level=info msg="RemoveContainer for \"bc2b439a24fceb8e6f3d98c2b269ed5f0c3fbc6fc7abf120febab48e11701057\"" Oct 2 19:59:26.700082 env[1137]: time="2023-10-02T19:59:26.700057886Z" level=info msg="RemoveContainer for \"bc2b439a24fceb8e6f3d98c2b269ed5f0c3fbc6fc7abf120febab48e11701057\"" Oct 2 19:59:26.701079 kubelet[1437]: E1002 19:59:26.700730 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:26.701177 env[1137]: time="2023-10-02T19:59:26.701144348Z" level=error msg="RemoveContainer for \"bc2b439a24fceb8e6f3d98c2b269ed5f0c3fbc6fc7abf120febab48e11701057\" failed" error="failed to set removing state for container \"bc2b439a24fceb8e6f3d98c2b269ed5f0c3fbc6fc7abf120febab48e11701057\": container is already in removing state" Oct 2 19:59:26.701668 kubelet[1437]: E1002 19:59:26.701322 1437 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"bc2b439a24fceb8e6f3d98c2b269ed5f0c3fbc6fc7abf120febab48e11701057\": container is already in removing state" containerID="bc2b439a24fceb8e6f3d98c2b269ed5f0c3fbc6fc7abf120febab48e11701057" Oct 2 19:59:26.701668 kubelet[1437]: E1002 19:59:26.701359 1437 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "bc2b439a24fceb8e6f3d98c2b269ed5f0c3fbc6fc7abf120febab48e11701057": container is already in removing state; Skipping pod 
"cilium-pxz7t_kube-system(3c2247a5-4919-4545-aa9f-e803da752887)" Oct 2 19:59:26.701668 kubelet[1437]: E1002 19:59:26.701413 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:26.701668 kubelet[1437]: E1002 19:59:26.701644 1437 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-pxz7t_kube-system(3c2247a5-4919-4545-aa9f-e803da752887)\"" pod="kube-system/cilium-pxz7t" podUID=3c2247a5-4919-4545-aa9f-e803da752887 Oct 2 19:59:26.702462 env[1137]: time="2023-10-02T19:59:26.702230171Z" level=info msg="RemoveContainer for \"bc2b439a24fceb8e6f3d98c2b269ed5f0c3fbc6fc7abf120febab48e11701057\" returns successfully" Oct 2 19:59:27.323091 kubelet[1437]: E1002 19:59:27.323052 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:27.704101 kubelet[1437]: E1002 19:59:27.703901 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:28.323712 kubelet[1437]: E1002 19:59:28.323674 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:28.370601 kubelet[1437]: W1002 19:59:28.370539 1437 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c2247a5_4919_4545_aa9f_e803da752887.slice/cri-containerd-bc2b439a24fceb8e6f3d98c2b269ed5f0c3fbc6fc7abf120febab48e11701057.scope WatchSource:0}: container "bc2b439a24fceb8e6f3d98c2b269ed5f0c3fbc6fc7abf120febab48e11701057" in namespace "k8s.io": not found Oct 2 19:59:28.510931 kubelet[1437]: E1002 19:59:28.510905 1437 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:29.324646 kubelet[1437]: E1002 19:59:29.324588 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:30.325506 kubelet[1437]: E1002 19:59:30.325461 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:31.326260 kubelet[1437]: E1002 19:59:31.326210 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:31.476798 kubelet[1437]: W1002 19:59:31.476762 1437 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c2247a5_4919_4545_aa9f_e803da752887.slice/cri-containerd-43fcec25692e5904e34870ce9a4243a5f5de09444199f123b0458ef95dcbbe55.scope WatchSource:0}: task 43fcec25692e5904e34870ce9a4243a5f5de09444199f123b0458ef95dcbbe55 not found: not found Oct 2 19:59:32.326728 kubelet[1437]: E1002 19:59:32.326687 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:33.169829 kubelet[1437]: E1002 19:59:33.169784 1437 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:33.327539 kubelet[1437]: E1002 
19:59:33.327505 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:33.512505 kubelet[1437]: E1002 19:59:33.512466 1437 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:34.327915 kubelet[1437]: E1002 19:59:34.327859 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:35.328131 kubelet[1437]: E1002 19:59:35.328051 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:36.328419 kubelet[1437]: E1002 19:59:36.328364 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:37.329085 kubelet[1437]: E1002 19:59:37.329040 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:38.329904 kubelet[1437]: E1002 19:59:38.329849 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:38.513535 kubelet[1437]: E1002 19:59:38.513510 1437 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:39.330430 kubelet[1437]: E1002 19:59:39.330372 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:40.331245 kubelet[1437]: E1002 19:59:40.331201 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:41.331827 kubelet[1437]: E1002 19:59:41.331762 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:41.357516 kubelet[1437]: E1002 19:59:41.357423 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:41.359623 env[1137]: time="2023-10-02T19:59:41.359586858Z" level=info msg="CreateContainer within sandbox \"3bbf1203711a1edcae1be731ec71d6e62261d0a9d6f9b4af8afa0f86634737bf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:59:41.368044 env[1137]: time="2023-10-02T19:59:41.367972636Z" level=info msg="CreateContainer within sandbox \"3bbf1203711a1edcae1be731ec71d6e62261d0a9d6f9b4af8afa0f86634737bf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"c028ce97ad25907d407ef8eb791ada184b15afc3937abad70ca045f0d363ef49\"" Oct 2 19:59:41.368653 env[1137]: time="2023-10-02T19:59:41.368618449Z" level=info msg="StartContainer for \"c028ce97ad25907d407ef8eb791ada184b15afc3937abad70ca045f0d363ef49\"" Oct 2 19:59:41.381314 kubelet[1437]: I1002 19:59:41.381274 1437 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-ghsgt" podStartSLOduration=16.070790015 podCreationTimestamp="2023-10-02 19:59:24 +0000 UTC" firstStartedPulling="2023-10-02 19:59:25.198992037 +0000 UTC m=+192.509885175" lastFinishedPulling="2023-10-02 19:59:26.509399698 +0000 UTC m=+193.820292876" observedRunningTime="2023-10-02 19:59:26.730612439 +0000 UTC 
m=+194.041505617" watchObservedRunningTime="2023-10-02 19:59:41.381197716 +0000 UTC m=+208.692090854" Oct 2 19:59:41.389775 systemd[1]: run-containerd-runc-k8s.io-c028ce97ad25907d407ef8eb791ada184b15afc3937abad70ca045f0d363ef49-runc.g0jzit.mount: Deactivated successfully. Oct 2 19:59:41.393176 systemd[1]: Started cri-containerd-c028ce97ad25907d407ef8eb791ada184b15afc3937abad70ca045f0d363ef49.scope. Oct 2 19:59:41.411736 systemd[1]: cri-containerd-c028ce97ad25907d407ef8eb791ada184b15afc3937abad70ca045f0d363ef49.scope: Deactivated successfully. Oct 2 19:59:41.524211 env[1137]: time="2023-10-02T19:59:41.524158307Z" level=info msg="shim disconnected" id=c028ce97ad25907d407ef8eb791ada184b15afc3937abad70ca045f0d363ef49 Oct 2 19:59:41.524211 env[1137]: time="2023-10-02T19:59:41.524213068Z" level=warning msg="cleaning up after shim disconnected" id=c028ce97ad25907d407ef8eb791ada184b15afc3937abad70ca045f0d363ef49 namespace=k8s.io Oct 2 19:59:41.524489 env[1137]: time="2023-10-02T19:59:41.524224428Z" level=info msg="cleaning up dead shim" Oct 2 19:59:41.532473 env[1137]: time="2023-10-02T19:59:41.532417122Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:59:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2211 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:59:41Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c028ce97ad25907d407ef8eb791ada184b15afc3937abad70ca045f0d363ef49/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:59:41.532753 env[1137]: time="2023-10-02T19:59:41.532689047Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 19:59:41.532926 env[1137]: time="2023-10-02T19:59:41.532880932Z" level=error msg="Failed to pipe stdout of container \"c028ce97ad25907d407ef8eb791ada184b15afc3937abad70ca045f0d363ef49\"" error="reading from a closed fifo" Oct 2 19:59:41.532972 env[1137]: time="2023-10-02T19:59:41.532912972Z" level=error msg="Failed to pipe stderr of container \"c028ce97ad25907d407ef8eb791ada184b15afc3937abad70ca045f0d363ef49\"" error="reading from a closed fifo" Oct 2 19:59:41.534634 env[1137]: time="2023-10-02T19:59:41.534591288Z" level=error msg="StartContainer for \"c028ce97ad25907d407ef8eb791ada184b15afc3937abad70ca045f0d363ef49\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:59:41.534992 kubelet[1437]: E1002 19:59:41.534970 1437 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="c028ce97ad25907d407ef8eb791ada184b15afc3937abad70ca045f0d363ef49" Oct 2 19:59:41.535219 kubelet[1437]: E1002 19:59:41.535197 1437 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:59:41.535219 kubelet[1437]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:59:41.535219 
kubelet[1437]: rm /hostbin/cilium-mount Oct 2 19:59:41.535219 kubelet[1437]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-26kfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-pxz7t_kube-system(3c2247a5-4919-4545-aa9f-e803da752887): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:59:41.535449 kubelet[1437]: E1002 19:59:41.535426 1437 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-pxz7t" podUID=3c2247a5-4919-4545-aa9f-e803da752887 Oct 2 19:59:41.727417 kubelet[1437]: I1002 19:59:41.727391 1437 scope.go:115] "RemoveContainer" containerID="43fcec25692e5904e34870ce9a4243a5f5de09444199f123b0458ef95dcbbe55" Oct 2 19:59:41.727718 kubelet[1437]: I1002 19:59:41.727701 1437 scope.go:115] "RemoveContainer" containerID="43fcec25692e5904e34870ce9a4243a5f5de09444199f123b0458ef95dcbbe55" Oct 2 19:59:41.728674 env[1137]: time="2023-10-02T19:59:41.728642401Z" level=info msg="RemoveContainer for \"43fcec25692e5904e34870ce9a4243a5f5de09444199f123b0458ef95dcbbe55\"" Oct 2 19:59:41.729104 env[1137]: time="2023-10-02T19:59:41.729009849Z" level=info msg="RemoveContainer for \"43fcec25692e5904e34870ce9a4243a5f5de09444199f123b0458ef95dcbbe55\"" Oct 2 19:59:41.729414 env[1137]: time="2023-10-02T19:59:41.729220054Z" level=error msg="RemoveContainer for \"43fcec25692e5904e34870ce9a4243a5f5de09444199f123b0458ef95dcbbe55\" failed" error="failed to set removing state for container \"43fcec25692e5904e34870ce9a4243a5f5de09444199f123b0458ef95dcbbe55\": container is already in removing state" Oct 2 19:59:41.729515 kubelet[1437]: E1002 19:59:41.729500 1437 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container 
\"43fcec25692e5904e34870ce9a4243a5f5de09444199f123b0458ef95dcbbe55\": container is already in removing state" containerID="43fcec25692e5904e34870ce9a4243a5f5de09444199f123b0458ef95dcbbe55" Oct 2 19:59:41.729597 kubelet[1437]: E1002 19:59:41.729587 1437 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "43fcec25692e5904e34870ce9a4243a5f5de09444199f123b0458ef95dcbbe55": container is already in removing state; Skipping pod "cilium-pxz7t_kube-system(3c2247a5-4919-4545-aa9f-e803da752887)" Oct 2 19:59:41.729701 kubelet[1437]: E1002 19:59:41.729692 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:41.730085 kubelet[1437]: E1002 19:59:41.730065 1437 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-pxz7t_kube-system(3c2247a5-4919-4545-aa9f-e803da752887)\"" pod="kube-system/cilium-pxz7t" podUID=3c2247a5-4919-4545-aa9f-e803da752887 Oct 2 19:59:41.731570 env[1137]: time="2023-10-02T19:59:41.731526463Z" level=info msg="RemoveContainer for \"43fcec25692e5904e34870ce9a4243a5f5de09444199f123b0458ef95dcbbe55\" returns successfully" Oct 2 19:59:42.332588 kubelet[1437]: E1002 19:59:42.332554 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:42.365714 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c028ce97ad25907d407ef8eb791ada184b15afc3937abad70ca045f0d363ef49-rootfs.mount: Deactivated successfully. 
Oct 2 19:59:43.334092 kubelet[1437]: E1002 19:59:43.333999 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:43.514889 kubelet[1437]: E1002 19:59:43.514830 1437 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:44.335144 kubelet[1437]: E1002 19:59:44.335084 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:44.629330 kubelet[1437]: W1002 19:59:44.629051 1437 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c2247a5_4919_4545_aa9f_e803da752887.slice/cri-containerd-c028ce97ad25907d407ef8eb791ada184b15afc3937abad70ca045f0d363ef49.scope WatchSource:0}: task c028ce97ad25907d407ef8eb791ada184b15afc3937abad70ca045f0d363ef49 not found: not found Oct 2 19:59:45.335852 kubelet[1437]: E1002 19:59:45.335800 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:46.336337 kubelet[1437]: E1002 19:59:46.336290 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:47.336711 kubelet[1437]: E1002 19:59:47.336647 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:48.337445 kubelet[1437]: E1002 19:59:48.337391 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:48.515648 kubelet[1437]: E1002 19:59:48.515625 1437 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:49.338508 kubelet[1437]: E1002 19:59:49.338432 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:50.338612 kubelet[1437]: E1002 19:59:50.338537 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:51.338869 kubelet[1437]: E1002 19:59:51.338805 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:52.339935 kubelet[1437]: E1002 19:59:52.339872 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:53.170213 kubelet[1437]: E1002 19:59:53.170153 1437 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:53.340924 kubelet[1437]: E1002 19:59:53.340863 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:53.357723 kubelet[1437]: E1002 19:59:53.357671 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:53.358132 kubelet[1437]: E1002 19:59:53.357887 1437 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed 
container=mount-cgroup pod=cilium-pxz7t_kube-system(3c2247a5-4919-4545-aa9f-e803da752887)\"" pod="kube-system/cilium-pxz7t" podUID=3c2247a5-4919-4545-aa9f-e803da752887 Oct 2 19:59:53.517304 kubelet[1437]: E1002 19:59:53.517272 1437 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:54.341111 kubelet[1437]: E1002 19:59:54.341069 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:55.342031 kubelet[1437]: E1002 19:59:55.341964 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:56.342417 kubelet[1437]: E1002 19:59:56.342380 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:57.342923 kubelet[1437]: E1002 19:59:57.342883 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:58.344289 kubelet[1437]: E1002 19:59:58.344238 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:58.518030 kubelet[1437]: E1002 19:59:58.517990 1437 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:59.344676 kubelet[1437]: E1002 19:59:59.344643 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:00.346032 kubelet[1437]: E1002 20:00:00.345973 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:01.347240 kubelet[1437]: E1002 20:00:01.347155 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:02.347846 kubelet[1437]: E1002 20:00:02.347805 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:03.348888 kubelet[1437]: E1002 20:00:03.348832 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:03.519567 kubelet[1437]: E1002 20:00:03.519540 1437 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:00:04.349404 kubelet[1437]: E1002 20:00:04.349356 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:05.350326 kubelet[1437]: E1002 20:00:05.350246 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:05.358523 kubelet[1437]: E1002 20:00:05.358486 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:00:05.361066 env[1137]: time="2023-10-02T20:00:05.361000792Z" level=info msg="CreateContainer within sandbox \"3bbf1203711a1edcae1be731ec71d6e62261d0a9d6f9b4af8afa0f86634737bf\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 20:00:05.370350 env[1137]: time="2023-10-02T20:00:05.370308640Z" level=info msg="CreateContainer within sandbox \"3bbf1203711a1edcae1be731ec71d6e62261d0a9d6f9b4af8afa0f86634737bf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"a2ec9bbf45e202b57988383f0eaf99a90f661e3377290d5fb2bc9d5e147b6e06\"" Oct 2 20:00:05.370904 env[1137]: time="2023-10-02T20:00:05.370876154Z" level=info msg="StartContainer for \"a2ec9bbf45e202b57988383f0eaf99a90f661e3377290d5fb2bc9d5e147b6e06\"" Oct 2 20:00:05.387897 systemd[1]: Started cri-containerd-a2ec9bbf45e202b57988383f0eaf99a90f661e3377290d5fb2bc9d5e147b6e06.scope. Oct 2 20:00:05.410733 systemd[1]: cri-containerd-a2ec9bbf45e202b57988383f0eaf99a90f661e3377290d5fb2bc9d5e147b6e06.scope: Deactivated successfully. Oct 2 20:00:05.414363 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2ec9bbf45e202b57988383f0eaf99a90f661e3377290d5fb2bc9d5e147b6e06-rootfs.mount: Deactivated successfully. Oct 2 20:00:05.429907 env[1137]: time="2023-10-02T20:00:05.429855249Z" level=info msg="shim disconnected" id=a2ec9bbf45e202b57988383f0eaf99a90f661e3377290d5fb2bc9d5e147b6e06 Oct 2 20:00:05.429907 env[1137]: time="2023-10-02T20:00:05.429909008Z" level=warning msg="cleaning up after shim disconnected" id=a2ec9bbf45e202b57988383f0eaf99a90f661e3377290d5fb2bc9d5e147b6e06 namespace=k8s.io Oct 2 20:00:05.430142 env[1137]: time="2023-10-02T20:00:05.429918048Z" level=info msg="cleaning up dead shim" Oct 2 20:00:05.439187 env[1137]: time="2023-10-02T20:00:05.439078299Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:00:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2254 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:00:05Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/a2ec9bbf45e202b57988383f0eaf99a90f661e3377290d5fb2bc9d5e147b6e06/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:00:05.439472 env[1137]: time="2023-10-02T20:00:05.439423575Z" level=error msg="copy shim log" error="read /proc/self/fd/47: file already closed" Oct 2 20:00:05.440117 env[1137]: time="2023-10-02T20:00:05.440075607Z" level=error msg="Failed to pipe stdout of container \"a2ec9bbf45e202b57988383f0eaf99a90f661e3377290d5fb2bc9d5e147b6e06\"" error="reading from a closed fifo" Oct 2 20:00:05.440174 env[1137]: time="2023-10-02T20:00:05.440146766Z" level=error msg="Failed to pipe stderr of container \"a2ec9bbf45e202b57988383f0eaf99a90f661e3377290d5fb2bc9d5e147b6e06\"" error="reading from a closed fifo" Oct 2 20:00:05.441564 env[1137]: time="2023-10-02T20:00:05.441525550Z" level=error msg="StartContainer for \"a2ec9bbf45e202b57988383f0eaf99a90f661e3377290d5fb2bc9d5e147b6e06\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:00:05.442312 kubelet[1437]: E1002 20:00:05.441877 1437 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="a2ec9bbf45e202b57988383f0eaf99a90f661e3377290d5fb2bc9d5e147b6e06" 
Oct 2 20:00:05.442312 kubelet[1437]: E1002 20:00:05.441979 1437 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:00:05.442312 kubelet[1437]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:00:05.442312 kubelet[1437]: rm /hostbin/cilium-mount Oct 2 20:00:05.442312 kubelet[1437]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-26kfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-pxz7t_kube-system(3c2247a5-4919-4545-aa9f-e803da752887): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:00:05.442312 kubelet[1437]: E1002 20:00:05.442026 1437 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-pxz7t" podUID=3c2247a5-4919-4545-aa9f-e803da752887 Oct 2 20:00:05.769838 kubelet[1437]: I1002 20:00:05.769811 1437 scope.go:115] "RemoveContainer" containerID="c028ce97ad25907d407ef8eb791ada184b15afc3937abad70ca045f0d363ef49" Oct 2 20:00:05.770254 kubelet[1437]: I1002 20:00:05.770150 1437 scope.go:115] "RemoveContainer" containerID="c028ce97ad25907d407ef8eb791ada184b15afc3937abad70ca045f0d363ef49" Oct 2 20:00:05.771286 env[1137]: time="2023-10-02T20:00:05.771253331Z" level=info msg="RemoveContainer for \"c028ce97ad25907d407ef8eb791ada184b15afc3937abad70ca045f0d363ef49\"" Oct 2 20:00:05.772364 env[1137]: time="2023-10-02T20:00:05.772038801Z" level=info msg="RemoveContainer for \"c028ce97ad25907d407ef8eb791ada184b15afc3937abad70ca045f0d363ef49\"" Oct 2 20:00:05.772364 env[1137]: time="2023-10-02T20:00:05.772111960Z" level=error msg="RemoveContainer 
for \"c028ce97ad25907d407ef8eb791ada184b15afc3937abad70ca045f0d363ef49\" failed" error="failed to set removing state for container \"c028ce97ad25907d407ef8eb791ada184b15afc3937abad70ca045f0d363ef49\": container is already in removing state" Oct 2 20:00:05.772479 kubelet[1437]: E1002 20:00:05.772251 1437 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"c028ce97ad25907d407ef8eb791ada184b15afc3937abad70ca045f0d363ef49\": container is already in removing state" containerID="c028ce97ad25907d407ef8eb791ada184b15afc3937abad70ca045f0d363ef49" Oct 2 20:00:05.772479 kubelet[1437]: E1002 20:00:05.772284 1437 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "c028ce97ad25907d407ef8eb791ada184b15afc3937abad70ca045f0d363ef49": container is already in removing state; Skipping pod "cilium-pxz7t_kube-system(3c2247a5-4919-4545-aa9f-e803da752887)" Oct 2 20:00:05.772479 kubelet[1437]: E1002 20:00:05.772343 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:00:05.772574 kubelet[1437]: E1002 20:00:05.772559 1437 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-pxz7t_kube-system(3c2247a5-4919-4545-aa9f-e803da752887)\"" pod="kube-system/cilium-pxz7t" podUID=3c2247a5-4919-4545-aa9f-e803da752887 Oct 2 20:00:05.774956 env[1137]: time="2023-10-02T20:00:05.774913927Z" level=info msg="RemoveContainer for \"c028ce97ad25907d407ef8eb791ada184b15afc3937abad70ca045f0d363ef49\" returns successfully" Oct 2 20:00:06.350786 kubelet[1437]: E1002 20:00:06.350700 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:07.350859 kubelet[1437]: E1002 20:00:07.350810 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:08.350953 kubelet[1437]: E1002 20:00:08.350913 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:08.521262 kubelet[1437]: E1002 20:00:08.521237 1437 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:00:08.533816 kubelet[1437]: W1002 20:00:08.533789 1437 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c2247a5_4919_4545_aa9f_e803da752887.slice/cri-containerd-a2ec9bbf45e202b57988383f0eaf99a90f661e3377290d5fb2bc9d5e147b6e06.scope WatchSource:0}: task a2ec9bbf45e202b57988383f0eaf99a90f661e3377290d5fb2bc9d5e147b6e06 not found: not found Oct 2 20:00:09.352099 kubelet[1437]: E1002 20:00:09.352058 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:10.353124 kubelet[1437]: E1002 20:00:10.353074 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:11.354110 kubelet[1437]: E1002 20:00:11.354079 1437 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:12.354600 kubelet[1437]: E1002 20:00:12.354553 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:13.169843 kubelet[1437]: E1002 20:00:13.169801 1437 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:13.181849 env[1137]: time="2023-10-02T20:00:13.181808609Z" level=info msg="StopPodSandbox for \"4eddbe9b906b9b1015cb9c3228fc473c60fc983c98e9db58e2c83a9d04c20417\"" Oct 2 20:00:13.182181 env[1137]: time="2023-10-02T20:00:13.181890128Z" level=info msg="TearDown network for sandbox \"4eddbe9b906b9b1015cb9c3228fc473c60fc983c98e9db58e2c83a9d04c20417\" successfully" Oct 2 20:00:13.182181 env[1137]: time="2023-10-02T20:00:13.181922368Z" level=info msg="StopPodSandbox for \"4eddbe9b906b9b1015cb9c3228fc473c60fc983c98e9db58e2c83a9d04c20417\" returns successfully" Oct 2 20:00:13.182344 env[1137]: time="2023-10-02T20:00:13.182316805Z" level=info msg="RemovePodSandbox for \"4eddbe9b906b9b1015cb9c3228fc473c60fc983c98e9db58e2c83a9d04c20417\"" Oct 2 20:00:13.182394 env[1137]: time="2023-10-02T20:00:13.182347724Z" level=info msg="Forcibly stopping sandbox \"4eddbe9b906b9b1015cb9c3228fc473c60fc983c98e9db58e2c83a9d04c20417\"" Oct 2 20:00:13.182458 env[1137]: time="2023-10-02T20:00:13.182432004Z" level=info msg="TearDown network for sandbox \"4eddbe9b906b9b1015cb9c3228fc473c60fc983c98e9db58e2c83a9d04c20417\" successfully" Oct 2 20:00:13.185216 env[1137]: time="2023-10-02T20:00:13.185161260Z" level=info msg="RemovePodSandbox \"4eddbe9b906b9b1015cb9c3228fc473c60fc983c98e9db58e2c83a9d04c20417\" returns successfully" Oct 2 20:00:13.355382 kubelet[1437]: E1002 20:00:13.355351 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:13.522090 kubelet[1437]: E1002 20:00:13.522049 1437 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:00:14.356838 kubelet[1437]: E1002 20:00:14.356801 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:14.357285 kubelet[1437]: E1002 20:00:14.357266 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:00:15.357824 kubelet[1437]: E1002 20:00:15.357789 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:16.359073 kubelet[1437]: E1002 20:00:16.359039 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:17.359987 kubelet[1437]: E1002 20:00:17.359952 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:18.360372 kubelet[1437]: E1002 20:00:18.360320 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:18.523570 kubelet[1437]: E1002 20:00:18.523546 1437 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni 
plugin not initialized" Oct 2 20:00:19.360559 kubelet[1437]: E1002 20:00:19.360522 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:20.358053 kubelet[1437]: E1002 20:00:20.357995 1437 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:00:20.358449 kubelet[1437]: E1002 20:00:20.358431 1437 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-pxz7t_kube-system(3c2247a5-4919-4545-aa9f-e803da752887)\"" pod="kube-system/cilium-pxz7t" podUID=3c2247a5-4919-4545-aa9f-e803da752887 Oct 2 20:00:20.361061 kubelet[1437]: E1002 20:00:20.361030 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:21.361400 kubelet[1437]: E1002 20:00:21.361362 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:22.362292 kubelet[1437]: E1002 20:00:22.362244 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:23.363185 kubelet[1437]: E1002 20:00:23.363161 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:23.524698 kubelet[1437]: E1002 20:00:23.524670 1437 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:00:24.364212 kubelet[1437]: E1002 20:00:24.364169 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:25.365152 kubelet[1437]: E1002 20:00:25.365123 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:25.953807 env[1137]: time="2023-10-02T20:00:25.953751918Z" level=info msg="StopPodSandbox for \"3bbf1203711a1edcae1be731ec71d6e62261d0a9d6f9b4af8afa0f86634737bf\"" Oct 2 20:00:25.954178 env[1137]: time="2023-10-02T20:00:25.953823558Z" level=info msg="Container to stop \"a2ec9bbf45e202b57988383f0eaf99a90f661e3377290d5fb2bc9d5e147b6e06\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 20:00:25.955062 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3bbf1203711a1edcae1be731ec71d6e62261d0a9d6f9b4af8afa0f86634737bf-shm.mount: Deactivated successfully. Oct 2 20:00:25.961474 systemd[1]: cri-containerd-3bbf1203711a1edcae1be731ec71d6e62261d0a9d6f9b4af8afa0f86634737bf.scope: Deactivated successfully. 
Oct 2 20:00:25.960000 audit: BPF prog-id=78 op=UNLOAD Oct 2 20:00:25.963171 kernel: kauditd_printk_skb: 50 callbacks suppressed Oct 2 20:00:25.963241 kernel: audit: type=1334 audit(1696276825.960:704): prog-id=78 op=UNLOAD Oct 2 20:00:25.967000 audit: BPF prog-id=81 op=UNLOAD Oct 2 20:00:25.968785 env[1137]: time="2023-10-02T20:00:25.968751098Z" level=info msg="StopContainer for \"c511a6a79a3a48fdf5d7984cc0fd52f7bd5b36dcd2bafdb255827a3e47eb5963\" with timeout 30 (s)" Oct 2 20:00:25.969138 env[1137]: time="2023-10-02T20:00:25.969093297Z" level=info msg="Stop container \"c511a6a79a3a48fdf5d7984cc0fd52f7bd5b36dcd2bafdb255827a3e47eb5963\" with signal terminated" Oct 2 20:00:25.970047 kernel: audit: type=1334 audit(1696276825.967:705): prog-id=81 op=UNLOAD Oct 2 20:00:25.977344 systemd[1]: cri-containerd-c511a6a79a3a48fdf5d7984cc0fd52f7bd5b36dcd2bafdb255827a3e47eb5963.scope: Deactivated successfully. Oct 2 20:00:25.976000 audit: BPF prog-id=86 op=UNLOAD Oct 2 20:00:25.979034 kernel: audit: type=1334 audit(1696276825.976:706): prog-id=86 op=UNLOAD Oct 2 20:00:25.981000 audit: BPF prog-id=89 op=UNLOAD Oct 2 20:00:25.983033 kernel: audit: type=1334 audit(1696276825.981:707): prog-id=89 op=UNLOAD Oct 2 20:00:25.987146 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bbf1203711a1edcae1be731ec71d6e62261d0a9d6f9b4af8afa0f86634737bf-rootfs.mount: Deactivated successfully. Oct 2 20:00:25.993406 env[1137]: time="2023-10-02T20:00:25.993334840Z" level=info msg="shim disconnected" id=3bbf1203711a1edcae1be731ec71d6e62261d0a9d6f9b4af8afa0f86634737bf Oct 2 20:00:25.993406 env[1137]: time="2023-10-02T20:00:25.993389600Z" level=warning msg="cleaning up after shim disconnected" id=3bbf1203711a1edcae1be731ec71d6e62261d0a9d6f9b4af8afa0f86634737bf namespace=k8s.io Oct 2 20:00:25.993406 env[1137]: time="2023-10-02T20:00:25.993399360Z" level=info msg="cleaning up dead shim" Oct 2 20:00:25.999430 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c511a6a79a3a48fdf5d7984cc0fd52f7bd5b36dcd2bafdb255827a3e47eb5963-rootfs.mount: Deactivated successfully. 
Oct 2 20:00:26.000829 env[1137]: time="2023-10-02T20:00:26.000776771Z" level=info msg="shim disconnected" id=c511a6a79a3a48fdf5d7984cc0fd52f7bd5b36dcd2bafdb255827a3e47eb5963 Oct 2 20:00:26.000829 env[1137]: time="2023-10-02T20:00:26.000826610Z" level=warning msg="cleaning up after shim disconnected" id=c511a6a79a3a48fdf5d7984cc0fd52f7bd5b36dcd2bafdb255827a3e47eb5963 namespace=k8s.io Oct 2 20:00:26.000978 env[1137]: time="2023-10-02T20:00:26.000837610Z" level=info msg="cleaning up dead shim" Oct 2 20:00:26.004259 env[1137]: time="2023-10-02T20:00:26.004225838Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:00:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2306 runtime=io.containerd.runc.v2\n" Oct 2 20:00:26.004550 env[1137]: time="2023-10-02T20:00:26.004525197Z" level=info msg="TearDown network for sandbox \"3bbf1203711a1edcae1be731ec71d6e62261d0a9d6f9b4af8afa0f86634737bf\" successfully" Oct 2 20:00:26.004609 env[1137]: time="2023-10-02T20:00:26.004551557Z" level=info msg="StopPodSandbox for \"3bbf1203711a1edcae1be731ec71d6e62261d0a9d6f9b4af8afa0f86634737bf\" returns successfully" Oct 2 20:00:26.010878 env[1137]: time="2023-10-02T20:00:26.010845174Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:00:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2316 runtime=io.containerd.runc.v2\n" Oct 2 20:00:26.012416 env[1137]: time="2023-10-02T20:00:26.012369048Z" level=info msg="StopContainer for \"c511a6a79a3a48fdf5d7984cc0fd52f7bd5b36dcd2bafdb255827a3e47eb5963\" returns successfully" Oct 2 20:00:26.012800 env[1137]: time="2023-10-02T20:00:26.012766287Z" level=info msg="StopPodSandbox for \"38ba1c556a22ef35a6864f5612c0438f66b9447d6c19123ac58838f2fc1467f9\"" Oct 2 20:00:26.012960 env[1137]: time="2023-10-02T20:00:26.012938606Z" level=info msg="Container to stop \"c511a6a79a3a48fdf5d7984cc0fd52f7bd5b36dcd2bafdb255827a3e47eb5963\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 20:00:26.014124 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-38ba1c556a22ef35a6864f5612c0438f66b9447d6c19123ac58838f2fc1467f9-shm.mount: Deactivated successfully. Oct 2 20:00:26.019793 systemd[1]: cri-containerd-38ba1c556a22ef35a6864f5612c0438f66b9447d6c19123ac58838f2fc1467f9.scope: Deactivated successfully. Oct 2 20:00:26.019000 audit: BPF prog-id=82 op=UNLOAD Oct 2 20:00:26.021038 kernel: audit: type=1334 audit(1696276826.019:708): prog-id=82 op=UNLOAD Oct 2 20:00:26.022000 audit: BPF prog-id=85 op=UNLOAD Oct 2 20:00:26.024036 kernel: audit: type=1334 audit(1696276826.022:709): prog-id=85 op=UNLOAD Oct 2 20:00:26.041615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38ba1c556a22ef35a6864f5612c0438f66b9447d6c19123ac58838f2fc1467f9-rootfs.mount: Deactivated successfully. 
Oct 2 20:00:26.046245 env[1137]: time="2023-10-02T20:00:26.046203084Z" level=info msg="shim disconnected" id=38ba1c556a22ef35a6864f5612c0438f66b9447d6c19123ac58838f2fc1467f9 Oct 2 20:00:26.046393 env[1137]: time="2023-10-02T20:00:26.046253204Z" level=warning msg="cleaning up after shim disconnected" id=38ba1c556a22ef35a6864f5612c0438f66b9447d6c19123ac58838f2fc1467f9 namespace=k8s.io Oct 2 20:00:26.046393 env[1137]: time="2023-10-02T20:00:26.046263684Z" level=info msg="cleaning up dead shim" Oct 2 20:00:26.053786 env[1137]: time="2023-10-02T20:00:26.053749177Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:00:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2349 runtime=io.containerd.runc.v2\n" Oct 2 20:00:26.054065 env[1137]: time="2023-10-02T20:00:26.054039376Z" level=info msg="TearDown network for sandbox \"38ba1c556a22ef35a6864f5612c0438f66b9447d6c19123ac58838f2fc1467f9\" successfully" Oct 2 20:00:26.054129 env[1137]: time="2023-10-02T20:00:26.054066016Z" level=info msg="StopPodSandbox for \"38ba1c556a22ef35a6864f5612c0438f66b9447d6c19123ac58838f2fc1467f9\" returns successfully" Oct 2 20:00:26.199694 kubelet[1437]: I1002 20:00:26.199627 1437 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-host-proc-sys-kernel\") pod \"3c2247a5-4919-4545-aa9f-e803da752887\" (UID: \"3c2247a5-4919-4545-aa9f-e803da752887\") " Oct 2 20:00:26.199694 kubelet[1437]: I1002 20:00:26.199673 1437 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-hostproc\") pod \"3c2247a5-4919-4545-aa9f-e803da752887\" (UID: \"3c2247a5-4919-4545-aa9f-e803da752887\") " Oct 2 20:00:26.199694 kubelet[1437]: I1002 20:00:26.199691 1437 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-cilium-run\") pod \"3c2247a5-4919-4545-aa9f-e803da752887\" (UID: \"3c2247a5-4919-4545-aa9f-e803da752887\") " Oct 2 20:00:26.199944 kubelet[1437]: I1002 20:00:26.199716 1437 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wttns\" (UniqueName: \"kubernetes.io/projected/d0e126f3-9a6a-4fcd-8aa6-f28da02596d9-kube-api-access-wttns\") pod \"d0e126f3-9a6a-4fcd-8aa6-f28da02596d9\" (UID: \"d0e126f3-9a6a-4fcd-8aa6-f28da02596d9\") " Oct 2 20:00:26.199944 kubelet[1437]: I1002 20:00:26.199737 1437 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-cilium-cgroup\") pod \"3c2247a5-4919-4545-aa9f-e803da752887\" (UID: \"3c2247a5-4919-4545-aa9f-e803da752887\") " Oct 2 20:00:26.199944 kubelet[1437]: I1002 20:00:26.199734 1437 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3c2247a5-4919-4545-aa9f-e803da752887" (UID: "3c2247a5-4919-4545-aa9f-e803da752887"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:26.199944 kubelet[1437]: I1002 20:00:26.199770 1437 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3c2247a5-4919-4545-aa9f-e803da752887-clustermesh-secrets\") pod \"3c2247a5-4919-4545-aa9f-e803da752887\" (UID: \"3c2247a5-4919-4545-aa9f-e803da752887\") " Oct 2 20:00:26.199944 kubelet[1437]: I1002 20:00:26.199791 1437 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-host-proc-sys-net\") pod \"3c2247a5-4919-4545-aa9f-e803da752887\" (UID: \"3c2247a5-4919-4545-aa9f-e803da752887\") " Oct 2 20:00:26.199944 kubelet[1437]: I1002 20:00:26.199742 1437 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-hostproc" (OuterVolumeSpecName: "hostproc") pod "3c2247a5-4919-4545-aa9f-e803da752887" (UID: "3c2247a5-4919-4545-aa9f-e803da752887"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:26.199944 kubelet[1437]: I1002 20:00:26.199811 1437 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26kfc\" (UniqueName: \"kubernetes.io/projected/3c2247a5-4919-4545-aa9f-e803da752887-kube-api-access-26kfc\") pod \"3c2247a5-4919-4545-aa9f-e803da752887\" (UID: \"3c2247a5-4919-4545-aa9f-e803da752887\") " Oct 2 20:00:26.199944 kubelet[1437]: I1002 20:00:26.199849 1437 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-etc-cni-netd\") pod \"3c2247a5-4919-4545-aa9f-e803da752887\" (UID: \"3c2247a5-4919-4545-aa9f-e803da752887\") " Oct 2 20:00:26.199944 kubelet[1437]: I1002 20:00:26.199904 1437 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-xtables-lock\") pod \"3c2247a5-4919-4545-aa9f-e803da752887\" (UID: \"3c2247a5-4919-4545-aa9f-e803da752887\") " Oct 2 20:00:26.199944 kubelet[1437]: I1002 20:00:26.199933 1437 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3c2247a5-4919-4545-aa9f-e803da752887-cilium-ipsec-secrets\") pod \"3c2247a5-4919-4545-aa9f-e803da752887\" (UID: \"3c2247a5-4919-4545-aa9f-e803da752887\") " Oct 2 20:00:26.200231 kubelet[1437]: I1002 20:00:26.199993 1437 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3c2247a5-4919-4545-aa9f-e803da752887-hubble-tls\") pod \"3c2247a5-4919-4545-aa9f-e803da752887\" (UID: \"3c2247a5-4919-4545-aa9f-e803da752887\") " Oct 2 20:00:26.200231 kubelet[1437]: I1002 20:00:26.200045 1437 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3c2247a5-4919-4545-aa9f-e803da752887" (UID: "3c2247a5-4919-4545-aa9f-e803da752887"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:26.200231 kubelet[1437]: I1002 20:00:26.200041 1437 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c2247a5-4919-4545-aa9f-e803da752887-cilium-config-path\") pod \"3c2247a5-4919-4545-aa9f-e803da752887\" (UID: \"3c2247a5-4919-4545-aa9f-e803da752887\") " Oct 2 20:00:26.200231 kubelet[1437]: I1002 20:00:26.200078 1437 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3c2247a5-4919-4545-aa9f-e803da752887" (UID: "3c2247a5-4919-4545-aa9f-e803da752887"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:26.200231 kubelet[1437]: I1002 20:00:26.200098 1437 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-bpf-maps\") pod \"3c2247a5-4919-4545-aa9f-e803da752887\" (UID: \"3c2247a5-4919-4545-aa9f-e803da752887\") " Oct 2 20:00:26.200231 kubelet[1437]: I1002 20:00:26.200126 1437 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d0e126f3-9a6a-4fcd-8aa6-f28da02596d9-cilium-config-path\") pod \"d0e126f3-9a6a-4fcd-8aa6-f28da02596d9\" (UID: \"d0e126f3-9a6a-4fcd-8aa6-f28da02596d9\") " Oct 2 20:00:26.200231 kubelet[1437]: I1002 20:00:26.200174 1437 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-cni-path\") pod \"3c2247a5-4919-4545-aa9f-e803da752887\" (UID: \"3c2247a5-4919-4545-aa9f-e803da752887\") " Oct 2 20:00:26.200231 kubelet[1437]: I1002 20:00:26.200199 1437 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-lib-modules\") pod \"3c2247a5-4919-4545-aa9f-e803da752887\" (UID: \"3c2247a5-4919-4545-aa9f-e803da752887\") " Oct 2 20:00:26.200438 kubelet[1437]: I1002 20:00:26.200256 1437 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-xtables-lock\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:00:26.200438 kubelet[1437]: I1002 20:00:26.200271 1437 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-hostproc\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:00:26.200438 kubelet[1437]: I1002 20:00:26.200282 1437 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-host-proc-sys-kernel\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:00:26.200438 kubelet[1437]: I1002 20:00:26.200294 1437 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-etc-cni-netd\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:00:26.200438 kubelet[1437]: I1002 20:00:26.200303 1437 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-host-proc-sys-net" (OuterVolumeSpecName: 
"host-proc-sys-net") pod "3c2247a5-4919-4545-aa9f-e803da752887" (UID: "3c2247a5-4919-4545-aa9f-e803da752887"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:26.200438 kubelet[1437]: I1002 20:00:26.200328 1437 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3c2247a5-4919-4545-aa9f-e803da752887" (UID: "3c2247a5-4919-4545-aa9f-e803da752887"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:26.200438 kubelet[1437]: I1002 20:00:26.200355 1437 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3c2247a5-4919-4545-aa9f-e803da752887" (UID: "3c2247a5-4919-4545-aa9f-e803da752887"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:26.200438 kubelet[1437]: I1002 20:00:26.200097 1437 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3c2247a5-4919-4545-aa9f-e803da752887" (UID: "3c2247a5-4919-4545-aa9f-e803da752887"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:26.200657 kubelet[1437]: W1002 20:00:26.200583 1437 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/d0e126f3-9a6a-4fcd-8aa6-f28da02596d9/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 20:00:26.202752 kubelet[1437]: W1002 20:00:26.200718 1437 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/3c2247a5-4919-4545-aa9f-e803da752887/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 20:00:26.202752 kubelet[1437]: I1002 20:00:26.202406 1437 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-cni-path" (OuterVolumeSpecName: "cni-path") pod "3c2247a5-4919-4545-aa9f-e803da752887" (UID: "3c2247a5-4919-4545-aa9f-e803da752887"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:26.202752 kubelet[1437]: I1002 20:00:26.202442 1437 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3c2247a5-4919-4545-aa9f-e803da752887" (UID: "3c2247a5-4919-4545-aa9f-e803da752887"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:00:26.202899 kubelet[1437]: I1002 20:00:26.202858 1437 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c2247a5-4919-4545-aa9f-e803da752887-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3c2247a5-4919-4545-aa9f-e803da752887" (UID: "3c2247a5-4919-4545-aa9f-e803da752887"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 20:00:26.203044 kubelet[1437]: I1002 20:00:26.203007 1437 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c2247a5-4919-4545-aa9f-e803da752887-kube-api-access-26kfc" (OuterVolumeSpecName: "kube-api-access-26kfc") pod "3c2247a5-4919-4545-aa9f-e803da752887" (UID: "3c2247a5-4919-4545-aa9f-e803da752887"). InnerVolumeSpecName "kube-api-access-26kfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:00:26.203711 kubelet[1437]: I1002 20:00:26.203678 1437 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0e126f3-9a6a-4fcd-8aa6-f28da02596d9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d0e126f3-9a6a-4fcd-8aa6-f28da02596d9" (UID: "d0e126f3-9a6a-4fcd-8aa6-f28da02596d9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 20:00:26.203784 kubelet[1437]: I1002 20:00:26.203701 1437 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c2247a5-4919-4545-aa9f-e803da752887-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3c2247a5-4919-4545-aa9f-e803da752887" (UID: "3c2247a5-4919-4545-aa9f-e803da752887"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 20:00:26.205739 kubelet[1437]: I1002 20:00:26.205663 1437 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0e126f3-9a6a-4fcd-8aa6-f28da02596d9-kube-api-access-wttns" (OuterVolumeSpecName: "kube-api-access-wttns") pod "d0e126f3-9a6a-4fcd-8aa6-f28da02596d9" (UID: "d0e126f3-9a6a-4fcd-8aa6-f28da02596d9"). InnerVolumeSpecName "kube-api-access-wttns". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:00:26.206000 kubelet[1437]: I1002 20:00:26.205976 1437 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c2247a5-4919-4545-aa9f-e803da752887-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "3c2247a5-4919-4545-aa9f-e803da752887" (UID: "3c2247a5-4919-4545-aa9f-e803da752887"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 20:00:26.207427 kubelet[1437]: I1002 20:00:26.207398 1437 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c2247a5-4919-4545-aa9f-e803da752887-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3c2247a5-4919-4545-aa9f-e803da752887" (UID: "3c2247a5-4919-4545-aa9f-e803da752887"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:00:26.301448 kubelet[1437]: I1002 20:00:26.301411 1437 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-cilium-cgroup\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:00:26.301639 kubelet[1437]: I1002 20:00:26.301627 1437 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3c2247a5-4919-4545-aa9f-e803da752887-clustermesh-secrets\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:00:26.301706 kubelet[1437]: I1002 20:00:26.301693 1437 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-host-proc-sys-net\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:00:26.301768 kubelet[1437]: I1002 20:00:26.301760 1437 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-26kfc\" (UniqueName: \"kubernetes.io/projected/3c2247a5-4919-4545-aa9f-e803da752887-kube-api-access-26kfc\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:00:26.301837 kubelet[1437]: I1002 20:00:26.301828 1437 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3c2247a5-4919-4545-aa9f-e803da752887-cilium-ipsec-secrets\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:00:26.301915 kubelet[1437]: I1002 20:00:26.301905 1437 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-lib-modules\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:00:26.301971 kubelet[1437]: I1002 20:00:26.301962 1437 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3c2247a5-4919-4545-aa9f-e803da752887-hubble-tls\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:00:26.302069 kubelet[1437]: I1002 20:00:26.302056 1437 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c2247a5-4919-4545-aa9f-e803da752887-cilium-config-path\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:00:26.302141 kubelet[1437]: I1002 20:00:26.302132 1437 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-bpf-maps\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:00:26.302201 kubelet[1437]: I1002 20:00:26.302193 1437 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d0e126f3-9a6a-4fcd-8aa6-f28da02596d9-cilium-config-path\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:00:26.302261 kubelet[1437]: I1002 20:00:26.302252 1437 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-cni-path\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:00:26.302319 kubelet[1437]: I1002 20:00:26.302310 1437 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3c2247a5-4919-4545-aa9f-e803da752887-cilium-run\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:00:26.302380 kubelet[1437]: I1002 20:00:26.302371 1437 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wttns\" (UniqueName: \"kubernetes.io/projected/d0e126f3-9a6a-4fcd-8aa6-f28da02596d9-kube-api-access-wttns\") on node 
\"10.0.0.13\" DevicePath \"\"" Oct 2 20:00:26.365861 kubelet[1437]: E1002 20:00:26.365825 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:26.806767 kubelet[1437]: I1002 20:00:26.806733 1437 scope.go:115] "RemoveContainer" containerID="a2ec9bbf45e202b57988383f0eaf99a90f661e3377290d5fb2bc9d5e147b6e06" Oct 2 20:00:26.807815 env[1137]: time="2023-10-02T20:00:26.807777580Z" level=info msg="RemoveContainer for \"a2ec9bbf45e202b57988383f0eaf99a90f661e3377290d5fb2bc9d5e147b6e06\"" Oct 2 20:00:26.810144 env[1137]: time="2023-10-02T20:00:26.810109331Z" level=info msg="RemoveContainer for \"a2ec9bbf45e202b57988383f0eaf99a90f661e3377290d5fb2bc9d5e147b6e06\" returns successfully" Oct 2 20:00:26.810471 kubelet[1437]: I1002 20:00:26.810445 1437 scope.go:115] "RemoveContainer" containerID="c511a6a79a3a48fdf5d7984cc0fd52f7bd5b36dcd2bafdb255827a3e47eb5963" Oct 2 20:00:26.813244 systemd[1]: Removed slice kubepods-burstable-pod3c2247a5_4919_4545_aa9f_e803da752887.slice. Oct 2 20:00:26.814712 systemd[1]: Removed slice kubepods-besteffort-podd0e126f3_9a6a_4fcd_8aa6_f28da02596d9.slice. Oct 2 20:00:26.815563 env[1137]: time="2023-10-02T20:00:26.815526991Z" level=info msg="RemoveContainer for \"c511a6a79a3a48fdf5d7984cc0fd52f7bd5b36dcd2bafdb255827a3e47eb5963\"" Oct 2 20:00:26.817567 env[1137]: time="2023-10-02T20:00:26.817539184Z" level=info msg="RemoveContainer for \"c511a6a79a3a48fdf5d7984cc0fd52f7bd5b36dcd2bafdb255827a3e47eb5963\" returns successfully" Oct 2 20:00:26.817699 kubelet[1437]: I1002 20:00:26.817682 1437 scope.go:115] "RemoveContainer" containerID="c511a6a79a3a48fdf5d7984cc0fd52f7bd5b36dcd2bafdb255827a3e47eb5963" Oct 2 20:00:26.817991 env[1137]: time="2023-10-02T20:00:26.817871863Z" level=error msg="ContainerStatus for \"c511a6a79a3a48fdf5d7984cc0fd52f7bd5b36dcd2bafdb255827a3e47eb5963\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c511a6a79a3a48fdf5d7984cc0fd52f7bd5b36dcd2bafdb255827a3e47eb5963\": not found" Oct 2 20:00:26.818089 kubelet[1437]: E1002 20:00:26.818029 1437 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c511a6a79a3a48fdf5d7984cc0fd52f7bd5b36dcd2bafdb255827a3e47eb5963\": not found" containerID="c511a6a79a3a48fdf5d7984cc0fd52f7bd5b36dcd2bafdb255827a3e47eb5963" Oct 2 20:00:26.818089 kubelet[1437]: I1002 20:00:26.818055 1437 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:c511a6a79a3a48fdf5d7984cc0fd52f7bd5b36dcd2bafdb255827a3e47eb5963} err="failed to get container status \"c511a6a79a3a48fdf5d7984cc0fd52f7bd5b36dcd2bafdb255827a3e47eb5963\": rpc error: code = NotFound desc = an error occurred when try to find container \"c511a6a79a3a48fdf5d7984cc0fd52f7bd5b36dcd2bafdb255827a3e47eb5963\": not found" Oct 2 20:00:26.955063 systemd[1]: var-lib-kubelet-pods-d0e126f3\x2d9a6a\x2d4fcd\x2d8aa6\x2df28da02596d9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwttns.mount: Deactivated successfully. Oct 2 20:00:26.955163 systemd[1]: var-lib-kubelet-pods-3c2247a5\x2d4919\x2d4545\x2daa9f\x2de803da752887-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d26kfc.mount: Deactivated successfully. Oct 2 20:00:26.955221 systemd[1]: var-lib-kubelet-pods-3c2247a5\x2d4919\x2d4545\x2daa9f\x2de803da752887-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Oct 2 20:00:26.955271 systemd[1]: var-lib-kubelet-pods-3c2247a5\x2d4919\x2d4545\x2daa9f\x2de803da752887-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 20:00:26.955315 systemd[1]: var-lib-kubelet-pods-3c2247a5\x2d4919\x2d4545\x2daa9f\x2de803da752887-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 20:00:27.359738 kubelet[1437]: I1002 20:00:27.359701 1437 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=3c2247a5-4919-4545-aa9f-e803da752887 path="/var/lib/kubelet/pods/3c2247a5-4919-4545-aa9f-e803da752887/volumes" Oct 2 20:00:27.360108 kubelet[1437]: I1002 20:00:27.360088 1437 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=d0e126f3-9a6a-4fcd-8aa6-f28da02596d9 path="/var/lib/kubelet/pods/d0e126f3-9a6a-4fcd-8aa6-f28da02596d9/volumes" Oct 2 20:00:27.366602 kubelet[1437]: E1002 20:00:27.366570 1437 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"