Oct 2 19:43:14.768836 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Oct 2 19:43:14.768857 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Oct 2 17:55:37 -00 2023 Oct 2 19:43:14.768865 kernel: efi: EFI v2.70 by EDK II Oct 2 19:43:14.768871 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Oct 2 19:43:14.768876 kernel: random: crng init done Oct 2 19:43:14.768881 kernel: ACPI: Early table checksum verification disabled Oct 2 19:43:14.768887 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Oct 2 19:43:14.768894 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Oct 2 19:43:14.768900 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:43:14.768905 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:43:14.768911 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:43:14.768916 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:43:14.768921 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:43:14.768927 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:43:14.768935 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:43:14.768941 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:43:14.768949 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:43:14.768955 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Oct 2 19:43:14.768961 kernel: NUMA: Failed to initialise from firmware Oct 2 19:43:14.768967 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Oct 2 19:43:14.768972 kernel: NUMA: NODE_DATA [mem 0xdcb0a900-0xdcb0ffff] Oct 2 19:43:14.768978 kernel: Zone ranges: Oct 2 19:43:14.768984 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Oct 2 19:43:14.768990 kernel: DMA32 empty Oct 2 19:43:14.768996 kernel: Normal empty Oct 2 19:43:14.769001 kernel: Movable zone start for each node Oct 2 19:43:14.769007 kernel: Early memory node ranges Oct 2 19:43:14.769013 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Oct 2 19:43:14.769019 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Oct 2 19:43:14.769024 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Oct 2 19:43:14.769030 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Oct 2 19:43:14.769036 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Oct 2 19:43:14.769042 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Oct 2 19:43:14.769048 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Oct 2 19:43:14.769053 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Oct 2 19:43:14.769060 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Oct 2 19:43:14.769066 kernel: psci: probing for conduit method from ACPI. Oct 2 19:43:14.769072 kernel: psci: PSCIv1.1 detected in firmware. 
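The faked NUMA node above covers [mem 0x0000000040000000-0x00000000dcffffff]. As a quick illustrative cross-check in Python (not part of the log), that span accounts for the memory totals reported further down in this boot:

    span_bytes = 0xdd000000 - 0x40000000   # end of [mem ...dcffffff] is inclusive, so span ends at 0xdd000000
    total_kib = span_bytes // 1024          # 2572288 KiB -> matches the "2572288K" total reported later
    available_kib = total_kib - 113012      # minus the "113012K reserved" -> 2459276K, as reported
    print(total_kib, available_kib)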
Oct 2 19:43:14.769078 kernel: psci: Using standard PSCI v0.2 function IDs Oct 2 19:43:14.769084 kernel: psci: Trusted OS migration not required Oct 2 19:43:14.769092 kernel: psci: SMC Calling Convention v1.1 Oct 2 19:43:14.769098 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Oct 2 19:43:14.769105 kernel: ACPI: SRAT not present Oct 2 19:43:14.769112 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784 Oct 2 19:43:14.769118 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096 Oct 2 19:43:14.769124 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Oct 2 19:43:14.769130 kernel: Detected PIPT I-cache on CPU0 Oct 2 19:43:14.769136 kernel: CPU features: detected: GIC system register CPU interface Oct 2 19:43:14.769142 kernel: CPU features: detected: Hardware dirty bit management Oct 2 19:43:14.769148 kernel: CPU features: detected: Spectre-v4 Oct 2 19:43:14.769154 kernel: CPU features: detected: Spectre-BHB Oct 2 19:43:14.769161 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 2 19:43:14.769167 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 2 19:43:14.769173 kernel: CPU features: detected: ARM erratum 1418040 Oct 2 19:43:14.769179 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Oct 2 19:43:14.769185 kernel: Policy zone: DMA Oct 2 19:43:14.769192 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca Oct 2 19:43:14.769199 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 2 19:43:14.769205 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 2 19:43:14.769211 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 19:43:14.769217 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 19:43:14.769223 kernel: Memory: 2459276K/2572288K available (9792K kernel code, 2092K rwdata, 7548K rodata, 34560K init, 779K bss, 113012K reserved, 0K cma-reserved) Oct 2 19:43:14.769231 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 2 19:43:14.769236 kernel: trace event string verifier disabled Oct 2 19:43:14.769242 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 2 19:43:14.769249 kernel: rcu: RCU event tracing is enabled. Oct 2 19:43:14.769255 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 2 19:43:14.769261 kernel: Trampoline variant of Tasks RCU enabled. Oct 2 19:43:14.769267 kernel: Tracing variant of Tasks RCU enabled. Oct 2 19:43:14.769273 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
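The command line above carries both the root filesystem selection (root=LABEL=ROOT) and the dm-verity root hash for the /usr partition (verity.usrhash=...). A minimal Python sketch of how such a line can be split into flags and key=value parameters, purely illustrative and not the kernel's or dracut's actual parsing logic:

    # Simplified: values containing spaces or quotes are not handled, but none occur here.
    cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
               "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw "
               "mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 "
               "flatcar.first_boot=detected acpi=force "
               "verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca")
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else True   # bare flags become True
    print(params["root"], params["verity.usrhash"][:12])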
Oct 2 19:43:14.769279 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 2 19:43:14.769285 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 2 19:43:14.769291 kernel: GICv3: 256 SPIs implemented Oct 2 19:43:14.769298 kernel: GICv3: 0 Extended SPIs implemented Oct 2 19:43:14.769304 kernel: GICv3: Distributor has no Range Selector support Oct 2 19:43:14.769310 kernel: Root IRQ handler: gic_handle_irq Oct 2 19:43:14.769316 kernel: GICv3: 16 PPIs implemented Oct 2 19:43:14.769322 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Oct 2 19:43:14.769328 kernel: ACPI: SRAT not present Oct 2 19:43:14.769333 kernel: ITS [mem 0x08080000-0x0809ffff] Oct 2 19:43:14.769340 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Oct 2 19:43:14.769346 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Oct 2 19:43:14.769352 kernel: GICv3: using LPI property table @0x00000000400d0000 Oct 2 19:43:14.769358 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Oct 2 19:43:14.769364 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 2 19:43:14.769372 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Oct 2 19:43:14.769378 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Oct 2 19:43:14.769384 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Oct 2 19:43:14.769390 kernel: arm-pv: using stolen time PV Oct 2 19:43:14.769396 kernel: Console: colour dummy device 80x25 Oct 2 19:43:14.769403 kernel: ACPI: Core revision 20210730 Oct 2 19:43:14.769409 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Oct 2 19:43:14.769416 kernel: pid_max: default: 32768 minimum: 301 Oct 2 19:43:14.769422 kernel: LSM: Security Framework initializing Oct 2 19:43:14.769428 kernel: SELinux: Initializing. Oct 2 19:43:14.769446 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:43:14.769454 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:43:14.769460 kernel: rcu: Hierarchical SRCU implementation. Oct 2 19:43:14.769466 kernel: Platform MSI: ITS@0x8080000 domain created Oct 2 19:43:14.769472 kernel: PCI/MSI: ITS@0x8080000 domain created Oct 2 19:43:14.769479 kernel: Remapping and enabling EFI services. Oct 2 19:43:14.769485 kernel: smp: Bringing up secondary CPUs ... 
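With the delay-loop calibration skipped, the BogoMIPS value above is derived from the 25.00 MHz architected timer. A small illustrative check of that arithmetic in Python, assuming a scheduler tick rate of CONFIG_HZ=1000 (an assumption; the log does not state the tick rate):

    timer_hz = 25_000_000            # "arch_timer: cp15 timer(s) running at 25.00MHz"
    HZ = 1000                        # assumed CONFIG_HZ
    lpj = timer_hz // HZ             # 25000, matching "(lpj=25000)"
    bogomips = lpj * HZ / 500_000    # 50.0, matching "50.00 BogoMIPS"
    print(lpj, bogomips)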
Oct 2 19:43:14.769491 kernel: Detected PIPT I-cache on CPU1 Oct 2 19:43:14.769498 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Oct 2 19:43:14.769505 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Oct 2 19:43:14.769512 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 2 19:43:14.769518 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Oct 2 19:43:14.769525 kernel: Detected PIPT I-cache on CPU2 Oct 2 19:43:14.769531 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Oct 2 19:43:14.769537 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Oct 2 19:43:14.769544 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 2 19:43:14.769550 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Oct 2 19:43:14.769556 kernel: Detected PIPT I-cache on CPU3 Oct 2 19:43:14.769562 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Oct 2 19:43:14.769570 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Oct 2 19:43:14.769576 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 2 19:43:14.769612 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Oct 2 19:43:14.769620 kernel: smp: Brought up 1 node, 4 CPUs Oct 2 19:43:14.769631 kernel: SMP: Total of 4 processors activated. Oct 2 19:43:14.769639 kernel: CPU features: detected: 32-bit EL0 Support Oct 2 19:43:14.769645 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Oct 2 19:43:14.769652 kernel: CPU features: detected: Common not Private translations Oct 2 19:43:14.769658 kernel: CPU features: detected: CRC32 instructions Oct 2 19:43:14.769665 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Oct 2 19:43:14.769671 kernel: CPU features: detected: LSE atomic instructions Oct 2 19:43:14.769678 kernel: CPU features: detected: Privileged Access Never Oct 2 19:43:14.769686 kernel: CPU features: detected: RAS Extension Support Oct 2 19:43:14.769692 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Oct 2 19:43:14.769699 kernel: CPU: All CPU(s) started at EL1 Oct 2 19:43:14.769705 kernel: alternatives: patching kernel code Oct 2 19:43:14.769713 kernel: devtmpfs: initialized Oct 2 19:43:14.769719 kernel: KASLR enabled Oct 2 19:43:14.769726 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 19:43:14.769733 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 2 19:43:14.769739 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 19:43:14.769745 kernel: SMBIOS 3.0.0 present. 
Oct 2 19:43:14.769752 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Oct 2 19:43:14.769759 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 19:43:14.769765 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 2 19:43:14.769772 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 2 19:43:14.769780 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 2 19:43:14.769786 kernel: audit: initializing netlink subsys (disabled) Oct 2 19:43:14.769793 kernel: audit: type=2000 audit(0.043:1): state=initialized audit_enabled=0 res=1 Oct 2 19:43:14.769799 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 19:43:14.769806 kernel: cpuidle: using governor menu Oct 2 19:43:14.769812 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Oct 2 19:43:14.769819 kernel: ASID allocator initialised with 32768 entries Oct 2 19:43:14.769825 kernel: ACPI: bus type PCI registered Oct 2 19:43:14.769832 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 19:43:14.769840 kernel: Serial: AMBA PL011 UART driver Oct 2 19:43:14.769847 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 2 19:43:14.769853 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Oct 2 19:43:14.769860 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 19:43:14.769866 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Oct 2 19:43:14.769873 kernel: cryptd: max_cpu_qlen set to 1000 Oct 2 19:43:14.769880 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 2 19:43:14.769886 kernel: ACPI: Added _OSI(Module Device) Oct 2 19:43:14.769893 kernel: ACPI: Added _OSI(Processor Device) Oct 2 19:43:14.769900 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 19:43:14.769907 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 19:43:14.769913 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 2 19:43:14.769937 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 19:43:14.769943 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 19:43:14.769950 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 2 19:43:14.769957 kernel: ACPI: Interpreter enabled Oct 2 19:43:14.769963 kernel: ACPI: Using GIC for interrupt routing Oct 2 19:43:14.769970 kernel: ACPI: MCFG table detected, 1 entries Oct 2 19:43:14.769977 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Oct 2 19:43:14.769984 kernel: printk: console [ttyAMA0] enabled Oct 2 19:43:14.769990 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 2 19:43:14.770133 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 2 19:43:14.770210 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 2 19:43:14.770286 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 2 19:43:14.770345 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Oct 2 19:43:14.770408 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Oct 2 19:43:14.770417 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Oct 2 19:43:14.770423 kernel: PCI host bridge to bus 0000:00 Oct 2 19:43:14.770515 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Oct 2 19:43:14.770572 kernel: pci_bus 0000:00: root bus resource [io 
0x0000-0xffff window] Oct 2 19:43:14.770634 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Oct 2 19:43:14.770689 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 2 19:43:14.770784 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Oct 2 19:43:14.770866 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Oct 2 19:43:14.770930 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Oct 2 19:43:14.770992 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Oct 2 19:43:14.771053 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Oct 2 19:43:14.771125 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Oct 2 19:43:14.771191 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Oct 2 19:43:14.771255 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Oct 2 19:43:14.771311 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Oct 2 19:43:14.771365 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 2 19:43:14.771419 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Oct 2 19:43:14.771428 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 2 19:43:14.771457 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 2 19:43:14.771464 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 2 19:43:14.771474 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 2 19:43:14.771480 kernel: iommu: Default domain type: Translated Oct 2 19:43:14.771487 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 2 19:43:14.771494 kernel: vgaarb: loaded Oct 2 19:43:14.771500 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 19:43:14.771507 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 2 19:43:14.771514 kernel: PTP clock support registered Oct 2 19:43:14.771523 kernel: Registered efivars operations Oct 2 19:43:14.771529 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 2 19:43:14.771536 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 19:43:14.771544 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 19:43:14.771550 kernel: pnp: PnP ACPI init Oct 2 19:43:14.771635 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Oct 2 19:43:14.771645 kernel: pnp: PnP ACPI: found 1 devices Oct 2 19:43:14.771652 kernel: NET: Registered PF_INET protocol family Oct 2 19:43:14.771659 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 2 19:43:14.771666 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 2 19:43:14.771672 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 19:43:14.771681 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 2 19:43:14.771688 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Oct 2 19:43:14.771694 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 2 19:43:14.771701 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:43:14.771707 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:43:14.771714 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 19:43:14.771721 kernel: PCI: CLS 0 bytes, default 64 Oct 2 19:43:14.771728 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Oct 2 19:43:14.771736 kernel: kvm [1]: HYP mode not available Oct 2 19:43:14.771743 kernel: Initialise system trusted keyrings Oct 2 19:43:14.771750 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 2 19:43:14.771756 kernel: Key type asymmetric registered Oct 2 19:43:14.771763 kernel: Asymmetric key parser 'x509' registered Oct 2 19:43:14.771770 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 19:43:14.771777 kernel: io scheduler mq-deadline registered Oct 2 19:43:14.771783 kernel: io scheduler kyber registered Oct 2 19:43:14.771790 kernel: io scheduler bfq registered Oct 2 19:43:14.771796 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 2 19:43:14.771804 kernel: ACPI: button: Power Button [PWRB] Oct 2 19:43:14.771811 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 2 19:43:14.771872 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Oct 2 19:43:14.771881 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 19:43:14.771888 kernel: thunder_xcv, ver 1.0 Oct 2 19:43:14.771895 kernel: thunder_bgx, ver 1.0 Oct 2 19:43:14.771902 kernel: nicpf, ver 1.0 Oct 2 19:43:14.771908 kernel: nicvf, ver 1.0 Oct 2 19:43:14.771979 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 2 19:43:14.772040 kernel: rtc-efi rtc-efi.0: setting system clock to 2023-10-02T19:43:14 UTC (1696275794) Oct 2 19:43:14.772049 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 2 19:43:14.772056 kernel: NET: Registered PF_INET6 protocol family Oct 2 19:43:14.772063 kernel: Segment Routing with IPv6 Oct 2 19:43:14.772069 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 19:43:14.772076 kernel: NET: Registered PF_PACKET protocol family Oct 2 19:43:14.772083 kernel: Key type dns_resolver registered Oct 2 19:43:14.772090 
kernel: registered taskstats version 1 Oct 2 19:43:14.772098 kernel: Loading compiled-in X.509 certificates Oct 2 19:43:14.772106 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 3a2a38edc68cb70dc60ec0223a6460557b3bb28d' Oct 2 19:43:14.772112 kernel: Key type .fscrypt registered Oct 2 19:43:14.772119 kernel: Key type fscrypt-provisioning registered Oct 2 19:43:14.772126 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 2 19:43:14.772133 kernel: ima: Allocated hash algorithm: sha1 Oct 2 19:43:14.772140 kernel: ima: No architecture policies found Oct 2 19:43:14.772148 kernel: Freeing unused kernel memory: 34560K Oct 2 19:43:14.772155 kernel: Run /init as init process Oct 2 19:43:14.772163 kernel: with arguments: Oct 2 19:43:14.772170 kernel: /init Oct 2 19:43:14.772176 kernel: with environment: Oct 2 19:43:14.772183 kernel: HOME=/ Oct 2 19:43:14.772189 kernel: TERM=linux Oct 2 19:43:14.772196 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 19:43:14.772205 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:43:14.772215 systemd[1]: Detected virtualization kvm. Oct 2 19:43:14.772225 systemd[1]: Detected architecture arm64. Oct 2 19:43:14.772232 systemd[1]: Running in initrd. Oct 2 19:43:14.772239 systemd[1]: No hostname configured, using default hostname. Oct 2 19:43:14.772246 systemd[1]: Hostname set to . Oct 2 19:43:14.772254 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:43:14.772261 systemd[1]: Queued start job for default target initrd.target. Oct 2 19:43:14.772269 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:43:14.772276 systemd[1]: Reached target cryptsetup.target. Oct 2 19:43:14.772285 systemd[1]: Reached target paths.target. Oct 2 19:43:14.772292 systemd[1]: Reached target slices.target. Oct 2 19:43:14.772299 systemd[1]: Reached target swap.target. Oct 2 19:43:14.772306 systemd[1]: Reached target timers.target. Oct 2 19:43:14.772313 systemd[1]: Listening on iscsid.socket. Oct 2 19:43:14.772321 systemd[1]: Listening on iscsiuio.socket. Oct 2 19:43:14.772328 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 19:43:14.772337 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 19:43:14.772344 systemd[1]: Listening on systemd-journald.socket. Oct 2 19:43:14.772352 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:43:14.772359 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:43:14.772366 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:43:14.772373 systemd[1]: Reached target sockets.target. Oct 2 19:43:14.772380 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:43:14.772387 systemd[1]: Finished network-cleanup.service. Oct 2 19:43:14.772395 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 19:43:14.772403 systemd[1]: Starting systemd-journald.service... Oct 2 19:43:14.772411 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:43:14.772418 systemd[1]: Starting systemd-resolved.service... Oct 2 19:43:14.772425 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 19:43:14.772432 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:43:14.772448 systemd[1]: Finished systemd-fsck-usr.service. 
Oct 2 19:43:14.772475 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:43:14.772482 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 19:43:14.772493 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 19:43:14.772503 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:43:14.772511 kernel: audit: type=1130 audit(1696275794.767:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:14.772523 systemd-journald[290]: Journal started Oct 2 19:43:14.772566 systemd-journald[290]: Runtime Journal (/run/log/journal/bc62862629114b7ba8ebcc0b86888ddf) is 6.0M, max 48.7M, 42.6M free. Oct 2 19:43:14.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:14.760297 systemd-modules-load[291]: Inserted module 'overlay' Oct 2 19:43:14.773927 systemd[1]: Started systemd-journald.service. Oct 2 19:43:14.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:14.777467 kernel: audit: type=1130 audit(1696275794.774:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:14.777514 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 19:43:14.780166 systemd-modules-load[291]: Inserted module 'br_netfilter' Oct 2 19:43:14.780858 kernel: Bridge firewalling registered Oct 2 19:43:14.792047 systemd-resolved[292]: Positive Trust Anchors: Oct 2 19:43:14.798133 kernel: SCSI subsystem initialized Oct 2 19:43:14.798155 kernel: audit: type=1130 audit(1696275794.792:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:14.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:14.792061 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:43:14.792089 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:43:14.792199 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 19:43:14.810511 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
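systemd-journald has just started the runtime journal that the remainder of this transcript is drawn from. As a minimal illustrative sketch (standard journalctl options invoked from Python, not part of this boot), a boot log with microsecond timestamps like the entries above can be pulled back out of the journal with:

    import subprocess
    boot_log = subprocess.run(
        ["journalctl", "-b", "-o", "short-precise", "--no-pager"],  # current boot, microsecond timestamps
        capture_output=True, text=True, check=True,
    ).stdout
    print(boot_log.splitlines()[0] if boot_log else "journal is empty")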
Oct 2 19:43:14.810533 kernel: device-mapper: uevent: version 1.0.3 Oct 2 19:43:14.810541 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 19:43:14.810550 kernel: audit: type=1130 audit(1696275794.807:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:14.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:14.794386 systemd[1]: Starting dracut-cmdline.service... Oct 2 19:43:14.799847 systemd-resolved[292]: Defaulting to hostname 'linux'. Oct 2 19:43:14.800713 systemd[1]: Started systemd-resolved.service. Oct 2 19:43:14.808387 systemd[1]: Reached target nss-lookup.target. Oct 2 19:43:14.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:14.811330 systemd-modules-load[291]: Inserted module 'dm_multipath' Oct 2 19:43:14.816901 kernel: audit: type=1130 audit(1696275794.812:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:14.817003 dracut-cmdline[308]: dracut-dracut-053 Oct 2 19:43:14.812592 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:43:14.814147 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:43:14.819619 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca Oct 2 19:43:14.823296 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:43:14.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:14.826461 kernel: audit: type=1130 audit(1696275794.823:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:14.898463 kernel: Loading iSCSI transport class v2.0-870. Oct 2 19:43:14.906461 kernel: iscsi: registered transport (tcp) Oct 2 19:43:14.921465 kernel: iscsi: registered transport (qla4xxx) Oct 2 19:43:14.921510 kernel: QLogic iSCSI HBA Driver Oct 2 19:43:14.975509 systemd[1]: Finished dracut-cmdline.service. Oct 2 19:43:14.982256 kernel: audit: type=1130 audit(1696275794.976:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:14.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:14.977391 systemd[1]: Starting dracut-pre-udev.service... 
Oct 2 19:43:15.031463 kernel: raid6: neonx8 gen() 13751 MB/s Oct 2 19:43:15.048463 kernel: raid6: neonx8 xor() 10787 MB/s Oct 2 19:43:15.065476 kernel: raid6: neonx4 gen() 13555 MB/s Oct 2 19:43:15.082484 kernel: raid6: neonx4 xor() 11033 MB/s Oct 2 19:43:15.099478 kernel: raid6: neonx2 gen() 13101 MB/s Oct 2 19:43:15.116485 kernel: raid6: neonx2 xor() 10242 MB/s Oct 2 19:43:15.133456 kernel: raid6: neonx1 gen() 10479 MB/s Oct 2 19:43:15.150461 kernel: raid6: neonx1 xor() 8759 MB/s Oct 2 19:43:15.167469 kernel: raid6: int64x8 gen() 6268 MB/s Oct 2 19:43:15.184471 kernel: raid6: int64x8 xor() 3449 MB/s Oct 2 19:43:15.201489 kernel: raid6: int64x4 gen() 7230 MB/s Oct 2 19:43:15.218477 kernel: raid6: int64x4 xor() 3853 MB/s Oct 2 19:43:15.235465 kernel: raid6: int64x2 gen() 6084 MB/s Oct 2 19:43:15.252474 kernel: raid6: int64x2 xor() 3301 MB/s Oct 2 19:43:15.269467 kernel: raid6: int64x1 gen() 5033 MB/s Oct 2 19:43:15.286636 kernel: raid6: int64x1 xor() 2635 MB/s Oct 2 19:43:15.286682 kernel: raid6: using algorithm neonx8 gen() 13751 MB/s Oct 2 19:43:15.286692 kernel: raid6: .... xor() 10787 MB/s, rmw enabled Oct 2 19:43:15.286700 kernel: raid6: using neon recovery algorithm Oct 2 19:43:15.297502 kernel: xor: measuring software checksum speed Oct 2 19:43:15.297543 kernel: 8regs : 17319 MB/sec Oct 2 19:43:15.298455 kernel: 32regs : 20755 MB/sec Oct 2 19:43:15.299450 kernel: arm64_neon : 27675 MB/sec Oct 2 19:43:15.299466 kernel: xor: using function: arm64_neon (27675 MB/sec) Oct 2 19:43:15.354475 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Oct 2 19:43:15.366797 systemd[1]: Finished dracut-pre-udev.service. Oct 2 19:43:15.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:15.369000 audit: BPF prog-id=7 op=LOAD Oct 2 19:43:15.369000 audit: BPF prog-id=8 op=LOAD Oct 2 19:43:15.370461 kernel: audit: type=1130 audit(1696275795.366:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:15.370485 kernel: audit: type=1334 audit(1696275795.369:10): prog-id=7 op=LOAD Oct 2 19:43:15.370594 systemd[1]: Starting systemd-udevd.service... Oct 2 19:43:15.382950 systemd-udevd[492]: Using default interface naming scheme 'v252'. Oct 2 19:43:15.386327 systemd[1]: Started systemd-udevd.service. Oct 2 19:43:15.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:15.388500 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 19:43:15.401845 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation Oct 2 19:43:15.433801 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 19:43:15.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:15.435211 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:43:15.472629 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:43:15.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:43:15.507657 kernel: virtio_blk virtio1: [vda] 9289728 512-byte logical blocks (4.76 GB/4.43 GiB) Oct 2 19:43:15.524460 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:43:15.537080 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 2 19:43:15.539366 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (545) Oct 2 19:43:15.538734 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 19:43:15.542569 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 19:43:15.547744 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 19:43:15.552930 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:43:15.555263 systemd[1]: Starting disk-uuid.service... Oct 2 19:43:15.563452 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:43:16.578197 disk-uuid[565]: The operation has completed successfully. Oct 2 19:43:16.579731 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:43:16.602196 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 19:43:16.603144 systemd[1]: Finished disk-uuid.service. Oct 2 19:43:16.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:16.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:16.608101 systemd[1]: Starting verity-setup.service... Oct 2 19:43:16.627460 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 2 19:43:16.650916 systemd[1]: Found device dev-mapper-usr.device. Oct 2 19:43:16.652855 systemd[1]: Mounting sysusr-usr.mount... Oct 2 19:43:16.655729 systemd[1]: Finished verity-setup.service. Oct 2 19:43:16.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:16.708266 systemd[1]: Mounted sysusr-usr.mount. Oct 2 19:43:16.709284 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 19:43:16.708924 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 2 19:43:16.709633 systemd[1]: Starting ignition-setup.service... Oct 2 19:43:16.711675 systemd[1]: Starting parse-ip-for-networkd.service... Oct 2 19:43:16.720642 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 2 19:43:16.720682 kernel: BTRFS info (device vda6): using free space tree Oct 2 19:43:16.720699 kernel: BTRFS info (device vda6): has skinny extents Oct 2 19:43:16.730609 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 19:43:16.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:16.738282 systemd[1]: Finished ignition-setup.service. Oct 2 19:43:16.739958 systemd[1]: Starting ignition-fetch-offline.service... Oct 2 19:43:16.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:43:16.832000 audit: BPF prog-id=9 op=LOAD Oct 2 19:43:16.831040 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 19:43:16.833162 systemd[1]: Starting systemd-networkd.service... Oct 2 19:43:16.858057 ignition[647]: Ignition 2.14.0 Oct 2 19:43:16.858072 ignition[647]: Stage: fetch-offline Oct 2 19:43:16.858112 ignition[647]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:43:16.858121 ignition[647]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:43:16.858254 ignition[647]: parsed url from cmdline: "" Oct 2 19:43:16.858257 ignition[647]: no config URL provided Oct 2 19:43:16.858262 ignition[647]: reading system config file "/usr/lib/ignition/user.ign" Oct 2 19:43:16.858269 ignition[647]: no config at "/usr/lib/ignition/user.ign" Oct 2 19:43:16.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:16.867362 systemd-networkd[742]: lo: Link UP Oct 2 19:43:16.858288 ignition[647]: op(1): [started] loading QEMU firmware config module Oct 2 19:43:16.867366 systemd-networkd[742]: lo: Gained carrier Oct 2 19:43:16.858293 ignition[647]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 2 19:43:16.867766 systemd-networkd[742]: Enumeration completed Oct 2 19:43:16.867538 ignition[647]: op(1): [finished] loading QEMU firmware config module Oct 2 19:43:16.867876 systemd[1]: Started systemd-networkd.service. Oct 2 19:43:16.867960 systemd-networkd[742]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:43:16.869365 systemd-networkd[742]: eth0: Link UP Oct 2 19:43:16.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:16.869368 systemd-networkd[742]: eth0: Gained carrier Oct 2 19:43:16.873144 systemd[1]: Reached target network.target. Oct 2 19:43:16.875600 systemd[1]: Starting iscsiuio.service... Oct 2 19:43:16.887181 systemd[1]: Started iscsiuio.service. Oct 2 19:43:16.893142 systemd[1]: Starting iscsid.service... Oct 2 19:43:16.897103 iscsid[749]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:43:16.897103 iscsid[749]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Oct 2 19:43:16.897103 iscsid[749]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 19:43:16.897103 iscsid[749]: If using hardware iscsi like qla4xxx this message can be ignored. Oct 2 19:43:16.897103 iscsid[749]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:43:16.897103 iscsid[749]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 19:43:16.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:43:16.899531 systemd-networkd[742]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 2 19:43:16.901526 systemd[1]: Started iscsid.service. Oct 2 19:43:16.904016 systemd[1]: Starting dracut-initqueue.service... Oct 2 19:43:16.914163 ignition[647]: parsing config with SHA512: cdc38ab2ab87960088f1d0f960a4e1da807fdcdb2d260a26c4f2173df703a96443cf4cbb2497dcd3ee52d8238313b017c63d72ce157afc4471c4d185990b1dc5 Oct 2 19:43:16.916844 systemd[1]: Finished dracut-initqueue.service. Oct 2 19:43:16.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:16.918838 systemd[1]: Reached target remote-fs-pre.target. Oct 2 19:43:16.920504 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:43:16.922380 systemd[1]: Reached target remote-fs.target. Oct 2 19:43:16.925103 systemd[1]: Starting dracut-pre-mount.service... Oct 2 19:43:16.934352 systemd[1]: Finished dracut-pre-mount.service. Oct 2 19:43:16.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:16.939547 unknown[647]: fetched base config from "system" Oct 2 19:43:16.939561 unknown[647]: fetched user config from "qemu" Oct 2 19:43:16.940311 ignition[647]: fetch-offline: fetch-offline passed Oct 2 19:43:16.940412 ignition[647]: Ignition finished successfully Oct 2 19:43:16.942883 systemd[1]: Finished ignition-fetch-offline.service. Oct 2 19:43:16.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:16.943600 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 2 19:43:16.944347 systemd[1]: Starting ignition-kargs.service... Oct 2 19:43:16.954985 ignition[764]: Ignition 2.14.0 Oct 2 19:43:16.954995 ignition[764]: Stage: kargs Oct 2 19:43:16.955093 ignition[764]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:43:16.955103 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:43:16.955888 ignition[764]: kargs: kargs passed Oct 2 19:43:16.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:16.957623 systemd[1]: Finished ignition-kargs.service. Oct 2 19:43:16.955931 ignition[764]: Ignition finished successfully Oct 2 19:43:16.959618 systemd[1]: Starting ignition-disks.service... Oct 2 19:43:16.968434 ignition[770]: Ignition 2.14.0 Oct 2 19:43:16.968474 ignition[770]: Stage: disks Oct 2 19:43:16.968585 ignition[770]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:43:16.968595 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:43:16.970247 systemd[1]: Finished ignition-disks.service. Oct 2 19:43:16.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:16.969406 ignition[770]: disks: disks passed Oct 2 19:43:16.971455 systemd[1]: Reached target initrd-root-device.target. 
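The iscsid warning a few entries above asks for /etc/iscsi/initiatorname.iscsi containing a single line of the form InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. A hypothetical Python sketch that writes such a file; the IQN value here is invented for illustration and is not taken from this system:

    initiator_name = "iqn.2023-10.io.example:host01"        # iqn.yyyy-mm.<reversed domain name>[:identifier]
    with open("/etc/iscsi/initiatorname.iscsi", "w") as f:  # path named in the warning; requires root
        f.write(f"InitiatorName={initiator_name}\n")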
Oct 2 19:43:16.969469 ignition[770]: Ignition finished successfully Oct 2 19:43:16.972310 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:43:16.973225 systemd[1]: Reached target local-fs.target. Oct 2 19:43:16.974175 systemd[1]: Reached target sysinit.target. Oct 2 19:43:16.975068 systemd[1]: Reached target basic.target. Oct 2 19:43:16.976900 systemd[1]: Starting systemd-fsck-root.service... Oct 2 19:43:16.988759 systemd-fsck[778]: ROOT: clean, 603/553520 files, 56011/553472 blocks Oct 2 19:43:16.991864 systemd[1]: Finished systemd-fsck-root.service. Oct 2 19:43:16.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:16.993202 systemd[1]: Mounting sysroot.mount... Oct 2 19:43:17.000232 systemd[1]: Mounted sysroot.mount. Oct 2 19:43:17.001128 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 19:43:17.000812 systemd[1]: Reached target initrd-root-fs.target. Oct 2 19:43:17.002886 systemd[1]: Mounting sysroot-usr.mount... Oct 2 19:43:17.003593 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 2 19:43:17.003631 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 19:43:17.003655 systemd[1]: Reached target ignition-diskful.target. Oct 2 19:43:17.006201 systemd[1]: Mounted sysroot-usr.mount. Oct 2 19:43:17.007557 systemd[1]: Starting initrd-setup-root.service... Oct 2 19:43:17.013088 initrd-setup-root[788]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 19:43:17.018125 initrd-setup-root[796]: cut: /sysroot/etc/group: No such file or directory Oct 2 19:43:17.022727 initrd-setup-root[804]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 19:43:17.027726 initrd-setup-root[812]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 19:43:17.058617 systemd[1]: Finished initrd-setup-root.service. Oct 2 19:43:17.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:17.059993 systemd[1]: Starting ignition-mount.service... Oct 2 19:43:17.061148 systemd[1]: Starting sysroot-boot.service... Oct 2 19:43:17.066364 bash[829]: umount: /sysroot/usr/share/oem: not mounted. Oct 2 19:43:17.076114 ignition[831]: INFO : Ignition 2.14.0 Oct 2 19:43:17.076114 ignition[831]: INFO : Stage: mount Oct 2 19:43:17.077258 ignition[831]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:43:17.077258 ignition[831]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:43:17.077258 ignition[831]: INFO : mount: mount passed Oct 2 19:43:17.077258 ignition[831]: INFO : Ignition finished successfully Oct 2 19:43:17.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:17.079979 systemd[1]: Finished ignition-mount.service. Oct 2 19:43:17.082514 systemd[1]: Finished sysroot-boot.service. Oct 2 19:43:17.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:43:17.662239 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:43:17.671455 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (840) Oct 2 19:43:17.673883 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 2 19:43:17.673915 kernel: BTRFS info (device vda6): using free space tree Oct 2 19:43:17.673925 kernel: BTRFS info (device vda6): has skinny extents Oct 2 19:43:17.679386 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:43:17.680797 systemd[1]: Starting ignition-files.service... Oct 2 19:43:17.696977 ignition[860]: INFO : Ignition 2.14.0 Oct 2 19:43:17.696977 ignition[860]: INFO : Stage: files Oct 2 19:43:17.698140 ignition[860]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:43:17.698140 ignition[860]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:43:17.698140 ignition[860]: DEBUG : files: compiled without relabeling support, skipping Oct 2 19:43:17.700426 ignition[860]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 19:43:17.700426 ignition[860]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 19:43:17.702726 ignition[860]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 19:43:17.703702 ignition[860]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 19:43:17.703702 ignition[860]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 19:43:17.703400 unknown[860]: wrote ssh authorized keys file for user: core Oct 2 19:43:17.706292 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Oct 2 19:43:17.706292 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1 Oct 2 19:43:17.945772 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 2 19:43:18.327830 ignition[860]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a Oct 2 19:43:18.327830 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Oct 2 19:43:18.331381 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Oct 2 19:43:18.331381 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1 Oct 2 19:43:18.450554 systemd-networkd[742]: eth0: Gained IPv6LL Oct 2 19:43:18.514982 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 2 19:43:18.717919 ignition[860]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251 Oct 2 19:43:18.720297 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" 
Oct 2 19:43:18.720297 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:43:18.720297 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://storage.googleapis.com/kubernetes-release/release/v1.28.1/bin/linux/arm64/kubeadm: attempt #1 Oct 2 19:43:18.765334 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Oct 2 19:43:19.202668 ignition[860]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 5a08b81f9cc82d3cce21130856ca63b8dafca9149d9775dd25b376eb0f18209aa0e4a47c0a6d7e6fb1316aacd5d59dec770f26c09120c866949d70bc415518b3 Oct 2 19:43:19.202668 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:43:19.205674 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:43:19.205674 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://storage.googleapis.com/kubernetes-release/release/v1.28.1/bin/linux/arm64/kubelet: attempt #1 Oct 2 19:43:19.338382 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Oct 2 19:43:20.077211 ignition[860]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 5a898ef543a6482895101ea58e33602e3c0a7682d322aaf08ac3dc8a5a3c8da8f09600d577024549288f8cebb1a86f9c79927796b69a3d8fe989ca8f12b147d6 Oct 2 19:43:20.079771 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:43:20.079771 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Oct 2 19:43:20.079771 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 19:43:20.079771 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:43:20.079771 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:43:20.079771 ignition[860]: INFO : files: op(9): [started] processing unit "prepare-cni-plugins.service" Oct 2 19:43:20.079771 ignition[860]: INFO : files: op(9): op(a): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:43:20.079771 ignition[860]: INFO : files: op(9): op(a): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:43:20.079771 ignition[860]: INFO : files: op(9): [finished] processing unit "prepare-cni-plugins.service" Oct 2 19:43:20.079771 ignition[860]: INFO : files: op(b): [started] processing unit "prepare-critools.service" Oct 2 19:43:20.079771 ignition[860]: INFO : files: op(b): op(c): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:43:20.079771 ignition[860]: INFO : files: op(b): op(c): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:43:20.079771 ignition[860]: INFO : files: op(b): [finished] processing unit "prepare-critools.service" Oct 2 19:43:20.079771 ignition[860]: INFO : files: op(d): [started] 
processing unit "coreos-metadata.service" Oct 2 19:43:20.079771 ignition[860]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 2 19:43:20.079771 ignition[860]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 2 19:43:20.079771 ignition[860]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 2 19:43:20.079771 ignition[860]: INFO : files: op(f): [started] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:43:20.108109 ignition[860]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:43:20.108109 ignition[860]: INFO : files: op(10): [started] setting preset to enabled for "prepare-critools.service" Oct 2 19:43:20.108109 ignition[860]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 19:43:20.108109 ignition[860]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Oct 2 19:43:20.108109 ignition[860]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 2 19:43:20.131384 ignition[860]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 2 19:43:20.132555 ignition[860]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Oct 2 19:43:20.132555 ignition[860]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:43:20.132555 ignition[860]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:43:20.132555 ignition[860]: INFO : files: files passed Oct 2 19:43:20.132555 ignition[860]: INFO : Ignition finished successfully Oct 2 19:43:20.138651 systemd[1]: Finished ignition-files.service. Oct 2 19:43:20.142473 kernel: kauditd_printk_skb: 22 callbacks suppressed Oct 2 19:43:20.142494 kernel: audit: type=1130 audit(1696275800.139:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.140515 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 19:43:20.143146 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 2 19:43:20.143841 systemd[1]: Starting ignition-quench.service... Oct 2 19:43:20.146996 initrd-setup-root-after-ignition[885]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Oct 2 19:43:20.149478 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 2 19:43:20.149612 systemd[1]: Finished ignition-quench.service. Oct 2 19:43:20.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:43:20.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.156066 initrd-setup-root-after-ignition[888]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 19:43:20.161290 kernel: audit: type=1130 audit(1696275800.150:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.161311 kernel: audit: type=1131 audit(1696275800.150:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.161321 kernel: audit: type=1130 audit(1696275800.158:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.152986 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 19:43:20.158660 systemd[1]: Reached target ignition-complete.target. Oct 2 19:43:20.169118 systemd[1]: Starting initrd-parse-etc.service... Oct 2 19:43:20.187280 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 19:43:20.187375 systemd[1]: Finished initrd-parse-etc.service. Oct 2 19:43:20.192277 kernel: audit: type=1130 audit(1696275800.188:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.192299 kernel: audit: type=1131 audit(1696275800.188:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.188762 systemd[1]: Reached target initrd-fs.target. Oct 2 19:43:20.192790 systemd[1]: Reached target initrd.target. Oct 2 19:43:20.193706 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 19:43:20.194449 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 19:43:20.206282 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 19:43:20.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.207722 systemd[1]: Starting initrd-cleanup.service... Oct 2 19:43:20.209977 kernel: audit: type=1130 audit(1696275800.206:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:43:20.217106 systemd[1]: Stopped target nss-lookup.target. Oct 2 19:43:20.217827 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 19:43:20.218855 systemd[1]: Stopped target timers.target. Oct 2 19:43:20.219922 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 19:43:20.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.220043 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 19:43:20.223964 kernel: audit: type=1131 audit(1696275800.220:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.220890 systemd[1]: Stopped target initrd.target. Oct 2 19:43:20.223641 systemd[1]: Stopped target basic.target. Oct 2 19:43:20.224513 systemd[1]: Stopped target ignition-complete.target. Oct 2 19:43:20.225491 systemd[1]: Stopped target ignition-diskful.target. Oct 2 19:43:20.226380 systemd[1]: Stopped target initrd-root-device.target. Oct 2 19:43:20.227501 systemd[1]: Stopped target remote-fs.target. Oct 2 19:43:20.228411 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 19:43:20.229400 systemd[1]: Stopped target sysinit.target. Oct 2 19:43:20.230501 systemd[1]: Stopped target local-fs.target. Oct 2 19:43:20.231423 systemd[1]: Stopped target local-fs-pre.target. Oct 2 19:43:20.232493 systemd[1]: Stopped target swap.target. Oct 2 19:43:20.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.236486 kernel: audit: type=1131 audit(1696275800.233:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.233466 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 19:43:20.233595 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 19:43:20.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.234531 systemd[1]: Stopped target cryptsetup.target. Oct 2 19:43:20.241186 kernel: audit: type=1131 audit(1696275800.237:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.237035 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 19:43:20.237138 systemd[1]: Stopped dracut-initqueue.service. Oct 2 19:43:20.238252 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 19:43:20.238351 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 19:43:20.240898 systemd[1]: Stopped target paths.target. Oct 2 19:43:20.241688 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 19:43:20.245489 systemd[1]: Stopped systemd-ask-password-console.path. 
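[editor's aside] The Ignition stage earlier in this section fetched kubeadm and kubelet and reported that each downloaded file "matches expected sum of" a SHA-512 digest. The following is a minimal illustrative sketch of that kind of digest check, not Ignition's actual code; the path and the expected digest are the ones the log printed for kubeadm.

```python
#!/usr/bin/env python3
"""Illustrative sketch only: stream a file through SHA-512 and compare it
against an expected digest, as the Ignition messages above describe."""
import hashlib
import sys

def sha512_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the hex SHA-512 digest of a file, reading it in chunks."""
    digest = hashlib.sha512()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Path and digest taken from the Ignition log lines above (op(5), kubeadm).
    path = sys.argv[1] if len(sys.argv) > 1 else "/opt/bin/kubeadm"
    expected = ("5a08b81f9cc82d3cce21130856ca63b8dafca9149d9775dd25b376eb0f18209a"
                "a0e4a47c0a6d7e6fb1316aacd5d59dec770f26c09120c866949d70bc415518b3")
    actual = sha512_of(path)
    print("OK" if actual == expected else f"MISMATCH: {actual}")
```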
Oct 2 19:43:20.246174 systemd[1]: Stopped target slices.target. Oct 2 19:43:20.247107 systemd[1]: Stopped target sockets.target. Oct 2 19:43:20.248003 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 19:43:20.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.248108 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 19:43:20.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.249083 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 19:43:20.249172 systemd[1]: Stopped ignition-files.service. Oct 2 19:43:20.251019 systemd[1]: Stopping ignition-mount.service... Oct 2 19:43:20.252806 iscsid[749]: iscsid shutting down. Oct 2 19:43:20.254216 systemd[1]: Stopping iscsid.service... Oct 2 19:43:20.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.255615 systemd[1]: Stopping sysroot-boot.service... Oct 2 19:43:20.260067 ignition[902]: INFO : Ignition 2.14.0 Oct 2 19:43:20.260067 ignition[902]: INFO : Stage: umount Oct 2 19:43:20.260067 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:43:20.260067 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:43:20.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.256217 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 19:43:20.263671 ignition[902]: INFO : umount: umount passed Oct 2 19:43:20.263671 ignition[902]: INFO : Ignition finished successfully Oct 2 19:43:20.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.256341 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 19:43:20.257075 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 19:43:20.257160 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 19:43:20.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.262125 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 19:43:20.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.262225 systemd[1]: Stopped iscsid.service. Oct 2 19:43:20.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:43:20.264545 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 19:43:20.264774 systemd[1]: Stopped ignition-mount.service. Oct 2 19:43:20.266389 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 19:43:20.266481 systemd[1]: Closed iscsid.socket. Oct 2 19:43:20.266987 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 19:43:20.267026 systemd[1]: Stopped ignition-disks.service. Oct 2 19:43:20.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.268382 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 19:43:20.268462 systemd[1]: Stopped ignition-kargs.service. Oct 2 19:43:20.269926 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 19:43:20.269970 systemd[1]: Stopped ignition-setup.service. Oct 2 19:43:20.271211 systemd[1]: Stopping iscsiuio.service... Oct 2 19:43:20.274972 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 19:43:20.275410 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 19:43:20.275506 systemd[1]: Stopped iscsiuio.service. Oct 2 19:43:20.277744 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 19:43:20.277825 systemd[1]: Finished initrd-cleanup.service. Oct 2 19:43:20.279124 systemd[1]: Stopped target network.target. Oct 2 19:43:20.280346 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 19:43:20.280379 systemd[1]: Closed iscsiuio.socket. Oct 2 19:43:20.281401 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:43:20.282656 systemd[1]: Stopping systemd-resolved.service... Oct 2 19:43:20.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.288535 systemd-networkd[742]: eth0: DHCPv6 lease lost Oct 2 19:43:20.289736 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:43:20.289833 systemd[1]: Stopped systemd-networkd.service. Oct 2 19:43:20.291102 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 19:43:20.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.291135 systemd[1]: Closed systemd-networkd.socket. Oct 2 19:43:20.296000 audit: BPF prog-id=9 op=UNLOAD Oct 2 19:43:20.293225 systemd[1]: Stopping network-cleanup.service... Oct 2 19:43:20.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:43:20.293850 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 19:43:20.293911 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 19:43:20.295299 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 19:43:20.295337 systemd[1]: Stopped systemd-sysctl.service. Oct 2 19:43:20.297429 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 19:43:20.297483 systemd[1]: Stopped systemd-modules-load.service. Oct 2 19:43:20.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.298596 systemd[1]: Stopping systemd-udevd.service... Oct 2 19:43:20.303218 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 19:43:20.304537 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 19:43:20.304646 systemd[1]: Stopped systemd-resolved.service. Oct 2 19:43:20.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.309332 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 19:43:20.309622 systemd[1]: Stopped systemd-udevd.service. Oct 2 19:43:20.311352 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 19:43:20.311539 systemd[1]: Stopped network-cleanup.service. Oct 2 19:43:20.313000 audit: BPF prog-id=6 op=UNLOAD Oct 2 19:43:20.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.313923 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 19:43:20.313963 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 19:43:20.316491 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 19:43:20.316531 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 19:43:20.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.317098 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 19:43:20.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.317138 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 19:43:20.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.319191 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 19:43:20.319234 systemd[1]: Stopped dracut-cmdline.service. Oct 2 19:43:20.320181 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 19:43:20.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.320217 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 19:43:20.323071 systemd[1]: Starting initrd-udevadm-cleanup-db.service... 
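[editor's aside] The dracut-cmdline and dracut-cmdline-ask units stopped above are the initrd services that act on the kernel command line. As a hedged illustration only (not dracut code), this sketch splits /proc/cmdline into flags and key=value pairs:

```python
#!/usr/bin/env python3
"""Illustration only: split the kernel command line into flags and key=value
pairs, the data the dracut-cmdline units stopped above operate on.
Simplification: quoted values and duplicate keys are not handled specially."""
from pathlib import Path

def parse_cmdline(text: str) -> dict:
    params = {}
    for token in text.split():
        key, sep, value = token.partition("=")
        # Tokens without '=' are treated as boolean flags.
        params[key] = value if sep else True
    return params

if __name__ == "__main__":
    cmdline = Path("/proc/cmdline").read_text().strip()
    for key, value in parse_cmdline(cmdline).items():
        print(f"{key} = {value}")
```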
Oct 2 19:43:20.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.324899 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 2 19:43:20.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.324959 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Oct 2 19:43:20.327657 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 19:43:20.327704 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 19:43:20.328772 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 19:43:20.328815 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 19:43:20.330981 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 2 19:43:20.331405 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 19:43:20.331505 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 19:43:20.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.346304 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 19:43:20.346421 systemd[1]: Stopped sysroot-boot.service. Oct 2 19:43:20.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.347658 systemd[1]: Reached target initrd-switch-root.target. Oct 2 19:43:20.348517 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 19:43:20.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:20.348564 systemd[1]: Stopped initrd-setup-root.service. Oct 2 19:43:20.350255 systemd[1]: Starting initrd-switch-root.service... Oct 2 19:43:20.358491 systemd[1]: Switching root. Oct 2 19:43:20.378672 systemd-journald[290]: Journal stopped Oct 2 19:43:22.514926 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). Oct 2 19:43:22.515038 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 19:43:22.515051 kernel: SELinux: Class anon_inode not defined in policy. 
Oct 2 19:43:22.515062 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 19:43:22.515072 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 19:43:22.515085 kernel: SELinux: policy capability open_perms=1 Oct 2 19:43:22.515105 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 19:43:22.515115 kernel: SELinux: policy capability always_check_network=0 Oct 2 19:43:22.515125 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 19:43:22.515135 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 19:43:22.515144 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 19:43:22.515154 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 19:43:22.515164 systemd[1]: Successfully loaded SELinux policy in 31.476ms. Oct 2 19:43:22.515197 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.207ms. Oct 2 19:43:22.515210 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:43:22.515221 systemd[1]: Detected virtualization kvm. Oct 2 19:43:22.515232 systemd[1]: Detected architecture arm64. Oct 2 19:43:22.515243 systemd[1]: Detected first boot. Oct 2 19:43:22.515255 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:43:22.515266 systemd[1]: Populated /etc with preset unit settings. Oct 2 19:43:22.515277 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:43:22.515288 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:43:22.515300 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:43:22.515311 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 19:43:22.515322 systemd[1]: Stopped initrd-switch-root.service. Oct 2 19:43:22.515332 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 19:43:22.515343 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 19:43:22.515353 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 19:43:22.515364 systemd[1]: Created slice system-getty.slice. Oct 2 19:43:22.515374 systemd[1]: Created slice system-modprobe.slice. Oct 2 19:43:22.515385 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 19:43:22.515395 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 19:43:22.515406 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 19:43:22.515417 systemd[1]: Created slice user.slice. Oct 2 19:43:22.515428 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:43:22.515467 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 19:43:22.515478 systemd[1]: Set up automount boot.automount. Oct 2 19:43:22.515488 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 19:43:22.515500 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 19:43:22.515510 systemd[1]: Stopped target initrd-fs.target. 
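[editor's aside] The kernel lines above enumerate the SELinux policy capabilities in effect once systemd loaded the policy. A small illustrative sketch (not part of the boot flow) that reads the same state back from selinuxfs, assuming it is mounted at /sys/fs/selinux:

```python
#!/usr/bin/env python3
"""Illustration only: report the SELinux enforcing state and the policy
capabilities the kernel printed above, assuming selinuxfs is mounted at
/sys/fs/selinux (its usual location)."""
from pathlib import Path

SELINUXFS = Path("/sys/fs/selinux")

def selinux_status() -> None:
    if not SELINUXFS.is_dir():
        print("selinuxfs not mounted; SELinux may be disabled")
        return
    enforce = (SELINUXFS / "enforce").read_text().strip()
    print("mode:", "enforcing" if enforce == "1" else "permissive")
    caps_dir = SELINUXFS / "policy_capabilities"
    if caps_dir.is_dir():
        for cap in sorted(caps_dir.iterdir()):
            # Each file holds "0" or "1", matching the kernel messages above.
            print(f"policy capability {cap.name}={cap.read_text().strip()}")

if __name__ == "__main__":
    selinux_status()
```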
Oct 2 19:43:22.515520 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 19:43:22.515544 systemd[1]: Reached target integritysetup.target. Oct 2 19:43:22.515571 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:43:22.515583 systemd[1]: Reached target remote-fs.target. Oct 2 19:43:22.515593 systemd[1]: Reached target slices.target. Oct 2 19:43:22.515603 systemd[1]: Reached target swap.target. Oct 2 19:43:22.515614 systemd[1]: Reached target torcx.target. Oct 2 19:43:22.515624 systemd[1]: Reached target veritysetup.target. Oct 2 19:43:22.515635 systemd[1]: Listening on systemd-coredump.socket. Oct 2 19:43:22.515645 systemd[1]: Listening on systemd-initctl.socket. Oct 2 19:43:22.515657 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:43:22.515668 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:43:22.515679 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:43:22.515689 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 19:43:22.515700 systemd[1]: Mounting dev-hugepages.mount... Oct 2 19:43:22.515710 systemd[1]: Mounting dev-mqueue.mount... Oct 2 19:43:22.515721 systemd[1]: Mounting media.mount... Oct 2 19:43:22.515731 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 19:43:22.515742 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 19:43:22.515752 systemd[1]: Mounting tmp.mount... Oct 2 19:43:22.515764 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 19:43:22.515776 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 19:43:22.515786 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:43:22.515796 systemd[1]: Starting modprobe@configfs.service... Oct 2 19:43:22.515809 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 19:43:22.515820 systemd[1]: Starting modprobe@drm.service... Oct 2 19:43:22.515830 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 19:43:22.515841 systemd[1]: Starting modprobe@fuse.service... Oct 2 19:43:22.515851 systemd[1]: Starting modprobe@loop.service... Oct 2 19:43:22.515864 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 19:43:22.515875 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 19:43:22.515885 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 19:43:22.515895 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 19:43:22.515906 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 19:43:22.515916 systemd[1]: Stopped systemd-journald.service. Oct 2 19:43:22.515926 systemd[1]: Starting systemd-journald.service... Oct 2 19:43:22.515936 kernel: fuse: init (API version 7.34) Oct 2 19:43:22.515946 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:43:22.515958 systemd[1]: Starting systemd-network-generator.service... Oct 2 19:43:22.515969 systemd[1]: Starting systemd-remount-fs.service... Oct 2 19:43:22.515979 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:43:22.515989 kernel: loop: module loaded Oct 2 19:43:22.516000 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 19:43:22.516010 systemd[1]: Stopped verity-setup.service. Oct 2 19:43:22.516020 systemd[1]: Mounted dev-hugepages.mount. Oct 2 19:43:22.516034 systemd[1]: Mounted dev-mqueue.mount. Oct 2 19:43:22.516046 systemd[1]: Mounted media.mount. Oct 2 19:43:22.516058 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 19:43:22.516069 systemd[1]: Mounted sys-kernel-tracing.mount. 
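[editor's aside] The modprobe@ template instances started above pull in modules such as configfs, dm_mod, efi_pstore, fuse and loop, and the kernel confirms two of them ("fuse: init (API version 7.34)", "loop: module loaded"). A hypothetical check, independent of systemd, for whether those modules are present either as loadable modules or as built-ins:

```python
#!/usr/bin/env python3
"""Illustration only: check whether the modules referenced above are present,
either listed in /proc/modules (loadable) or visible under /sys/module
(which also covers built-in code)."""
from pathlib import Path

def module_present(name: str) -> bool:
    loaded = {line.split()[0]
              for line in Path("/proc/modules").read_text().splitlines()
              if line.strip()}
    return name in loaded or (Path("/sys/module") / name).is_dir()

if __name__ == "__main__":
    for mod in ("configfs", "dm_mod", "efi_pstore", "fuse", "loop"):
        print(f"{mod}: {'present' if module_present(mod) else 'absent'}")
```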
Oct 2 19:43:22.516079 systemd[1]: Mounted tmp.mount. Oct 2 19:43:22.516089 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:43:22.516099 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 19:43:22.516113 systemd-journald[994]: Journal started Oct 2 19:43:22.516162 systemd-journald[994]: Runtime Journal (/run/log/journal/bc62862629114b7ba8ebcc0b86888ddf) is 6.0M, max 48.7M, 42.6M free. Oct 2 19:43:20.443000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 19:43:20.611000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:43:20.611000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:43:20.611000 audit: BPF prog-id=10 op=LOAD Oct 2 19:43:20.611000 audit: BPF prog-id=10 op=UNLOAD Oct 2 19:43:20.611000 audit: BPF prog-id=11 op=LOAD Oct 2 19:43:20.611000 audit: BPF prog-id=11 op=UNLOAD Oct 2 19:43:22.403000 audit: BPF prog-id=12 op=LOAD Oct 2 19:43:22.403000 audit: BPF prog-id=3 op=UNLOAD Oct 2 19:43:22.404000 audit: BPF prog-id=13 op=LOAD Oct 2 19:43:22.404000 audit: BPF prog-id=14 op=LOAD Oct 2 19:43:22.404000 audit: BPF prog-id=4 op=UNLOAD Oct 2 19:43:22.404000 audit: BPF prog-id=5 op=UNLOAD Oct 2 19:43:22.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:22.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:22.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:22.412000 audit: BPF prog-id=12 op=UNLOAD Oct 2 19:43:22.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:22.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:22.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:22.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:43:22.483000 audit: BPF prog-id=15 op=LOAD Oct 2 19:43:22.483000 audit: BPF prog-id=16 op=LOAD Oct 2 19:43:22.483000 audit: BPF prog-id=17 op=LOAD Oct 2 19:43:22.483000 audit: BPF prog-id=13 op=UNLOAD Oct 2 19:43:22.483000 audit: BPF prog-id=14 op=UNLOAD Oct 2 19:43:22.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:22.511000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 19:43:22.511000 audit[994]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=fffffa707c10 a2=4000 a3=1 items=0 ppid=1 pid=994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:22.511000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 19:43:22.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:22.396957 systemd[1]: Queued start job for default target multi-user.target. Oct 2 19:43:22.517618 systemd[1]: Finished modprobe@configfs.service. Oct 2 19:43:20.654137 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2023-10-02T19:43:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:43:22.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:22.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:22.396970 systemd[1]: Unnecessary job was removed for dev-vda6.device. Oct 2 19:43:20.654766 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2023-10-02T19:43:20Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:43:22.404934 systemd[1]: systemd-journald.service: Deactivated successfully. 
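[editor's aside] systemd-journald reports above that the runtime journal under /run/log/journal is 6.0M with a 48.7M cap. A rough, illustrative way to approximate that usage figure by summing file sizes (read access to the journal directory is assumed; `journalctl --disk-usage` is the supported interface):

```python
#!/usr/bin/env python3
"""Illustration only: sum the on-disk size of files under /run/log/journal,
roughly matching the 'Runtime Journal ... is 6.0M' figure journald printed
above. Requires read permission on the journal directory."""
import os

def dir_usage(root: str) -> int:
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # a journal file may rotate away while we walk
    return total

if __name__ == "__main__":
    usage = dir_usage("/run/log/journal")
    print(f"runtime journal usage: {usage / (1024 * 1024):.1f} MiB")
```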
Oct 2 19:43:20.654786 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2023-10-02T19:43:20Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:43:20.654821 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2023-10-02T19:43:20Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 19:43:20.654832 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2023-10-02T19:43:20Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 19:43:20.654867 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2023-10-02T19:43:20Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 19:43:20.654879 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2023-10-02T19:43:20Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 19:43:20.655074 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2023-10-02T19:43:20Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 19:43:20.655112 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2023-10-02T19:43:20Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:43:20.655125 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2023-10-02T19:43:20Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:43:20.655526 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2023-10-02T19:43:20Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 19:43:20.655568 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2023-10-02T19:43:20Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 19:43:20.655587 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2023-10-02T19:43:20Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 19:43:22.518817 systemd[1]: Started systemd-journald.service. Oct 2 19:43:22.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:43:20.655601 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2023-10-02T19:43:20Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 19:43:20.655617 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2023-10-02T19:43:20Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 19:43:20.655630 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2023-10-02T19:43:20Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 19:43:22.131128 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2023-10-02T19:43:22Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:43:22.131410 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2023-10-02T19:43:22Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:43:22.131537 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2023-10-02T19:43:22Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:43:22.131713 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2023-10-02T19:43:22Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:43:22.131769 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2023-10-02T19:43:22Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 19:43:22.131828 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2023-10-02T19:43:22Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 19:43:22.519391 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 19:43:22.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:22.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:22.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:43:22.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:22.520574 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 19:43:22.521543 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 19:43:22.521704 systemd[1]: Finished modprobe@drm.service. Oct 2 19:43:22.522533 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 19:43:22.522687 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 19:43:22.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:22.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:22.523525 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 19:43:22.523688 systemd[1]: Finished modprobe@fuse.service. Oct 2 19:43:22.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:22.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:22.524497 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 19:43:22.524998 systemd[1]: Finished modprobe@loop.service. Oct 2 19:43:22.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:22.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:22.526113 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:43:22.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:22.527069 systemd[1]: Finished systemd-network-generator.service. Oct 2 19:43:22.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:22.528261 systemd[1]: Finished systemd-remount-fs.service. Oct 2 19:43:22.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:22.529468 systemd[1]: Reached target network-pre.target. Oct 2 19:43:22.531852 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 19:43:22.533602 systemd[1]: Mounting sys-kernel-config.mount... 
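[editor's aside] The torcx-generator messages interleaved above walk a list of store paths, skip the ones that do not exist, and register archives such as docker:20.10.torcx.tgz as name/reference pairs. A hypothetical sketch of that kind of store scan (the directory subset and the "<name>:<reference>.torcx.tgz" naming are inferred from the generator's own log lines; this is not torcx code):

```python
#!/usr/bin/env python3
"""Illustration only: scan torcx-style store directories for *.torcx.tgz
archives and split each file name into (name, reference), mirroring the
"new archive/reference added to cache" messages above."""
from pathlib import Path

# A subset of the store paths printed by torcx-generator in the log above.
STORE_PATHS = [
    "/usr/share/torcx/store",
    "/var/lib/torcx/store",
]

def scan_stores(paths):
    for store in map(Path, paths):
        if not store.is_dir():
            print(f"store skipped: {store} does not exist")
            continue
        for archive in sorted(store.glob("*.torcx.tgz")):
            # File names look like "<name>:<reference>.torcx.tgz".
            stem = archive.name[: -len(".torcx.tgz")]
            name, _, reference = stem.partition(":")
            print(f"archive: name={name} reference={reference} path={archive}")

if __name__ == "__main__":
    scan_stores(STORE_PATHS)
```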
Oct 2 19:43:22.534172 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 19:43:22.537966 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 19:43:22.540160 systemd[1]: Starting systemd-journal-flush.service... Oct 2 19:43:22.541194 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 19:43:22.542510 systemd[1]: Starting systemd-random-seed.service... Oct 2 19:43:22.543534 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 19:43:22.545038 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:43:22.547407 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 19:43:22.549655 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 19:43:22.558569 systemd-journald[994]: Time spent on flushing to /var/log/journal/bc62862629114b7ba8ebcc0b86888ddf is 13.082ms for 979 entries. Oct 2 19:43:22.558569 systemd-journald[994]: System Journal (/var/log/journal/bc62862629114b7ba8ebcc0b86888ddf) is 8.0M, max 195.6M, 187.6M free. Oct 2 19:43:22.588841 systemd-journald[994]: Received client request to flush runtime journal. Oct 2 19:43:22.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:22.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:22.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:22.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:22.559880 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:43:22.561938 systemd[1]: Starting systemd-udev-settle.service... Oct 2 19:43:22.589377 udevadm[1031]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 2 19:43:22.566430 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:43:22.570322 systemd[1]: Finished systemd-random-seed.service. Oct 2 19:43:22.571692 systemd[1]: Reached target first-boot-complete.target. Oct 2 19:43:22.573086 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 19:43:22.577369 systemd[1]: Starting systemd-sysusers.service... Oct 2 19:43:22.590322 systemd[1]: Finished systemd-journal-flush.service. Oct 2 19:43:22.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:22.602733 systemd[1]: Finished systemd-sysusers.service. Oct 2 19:43:22.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:43:22.604745 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:43:22.635966 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:43:22.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:22.970010 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 19:43:22.972052 systemd[1]: Starting systemd-udevd.service... Oct 2 19:43:22.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:22.970000 audit: BPF prog-id=18 op=LOAD Oct 2 19:43:22.970000 audit: BPF prog-id=19 op=LOAD Oct 2 19:43:22.970000 audit: BPF prog-id=7 op=UNLOAD Oct 2 19:43:22.970000 audit: BPF prog-id=8 op=UNLOAD Oct 2 19:43:22.993139 systemd-udevd[1038]: Using default interface naming scheme 'v252'. Oct 2 19:43:23.004817 systemd[1]: Started systemd-udevd.service. Oct 2 19:43:23.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:23.007000 audit: BPF prog-id=20 op=LOAD Oct 2 19:43:23.012349 systemd[1]: Starting systemd-networkd.service... Oct 2 19:43:23.015000 audit: BPF prog-id=21 op=LOAD Oct 2 19:43:23.015000 audit: BPF prog-id=22 op=LOAD Oct 2 19:43:23.015000 audit: BPF prog-id=23 op=LOAD Oct 2 19:43:23.016749 systemd[1]: Starting systemd-userdbd.service... Oct 2 19:43:23.029255 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Oct 2 19:43:23.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:23.052496 systemd[1]: Started systemd-userdbd.service. Oct 2 19:43:23.088647 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:43:23.100957 systemd-networkd[1050]: lo: Link UP Oct 2 19:43:23.100966 systemd-networkd[1050]: lo: Gained carrier Oct 2 19:43:23.101302 systemd-networkd[1050]: Enumeration completed Oct 2 19:43:23.101409 systemd-networkd[1050]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:43:23.101423 systemd[1]: Started systemd-networkd.service. Oct 2 19:43:23.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:23.103091 systemd-networkd[1050]: eth0: Link UP Oct 2 19:43:23.103101 systemd-networkd[1050]: eth0: Gained carrier Oct 2 19:43:23.118566 systemd-networkd[1050]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 2 19:43:23.133854 systemd[1]: Finished systemd-udev-settle.service. Oct 2 19:43:23.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:23.135828 systemd[1]: Starting lvm2-activation-early.service... 
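[editor's aside] systemd-networkd reports above that eth0 obtained 10.0.0.13/16 with gateway 10.0.0.1 via DHCPv4. For illustration only, a sketch that lists the IPv4 addresses currently assigned by parsing iproute2's one-line output (the `ip` tool is assumed to be installed, as it is on most Linux systems):

```python
#!/usr/bin/env python3
"""Illustration only: list assigned IPv4 addresses by parsing the one-line
output of `ip -o -4 addr show` (iproute2 assumed to be available)."""
import subprocess

def ipv4_addresses():
    out = subprocess.run(
        ["ip", "-o", "-4", "addr", "show"],
        check=True, capture_output=True, text=True,
    ).stdout
    for line in out.splitlines():
        # One-line format: "<idx>: <ifname>    inet <addr>/<prefix> ..."
        fields = line.split()
        ifname, cidr = fields[1], fields[3]
        yield ifname, cidr

if __name__ == "__main__":
    for ifname, cidr in ipv4_addresses():
        print(f"{ifname}: {cidr}")
```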
Oct 2 19:43:23.165186 lvm[1071]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:43:23.191401 systemd[1]: Finished lvm2-activation-early.service. Oct 2 19:43:23.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:23.192186 systemd[1]: Reached target cryptsetup.target. Oct 2 19:43:23.193965 systemd[1]: Starting lvm2-activation.service... Oct 2 19:43:23.198030 lvm[1072]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:43:23.228373 systemd[1]: Finished lvm2-activation.service. Oct 2 19:43:23.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:23.229140 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:43:23.229765 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 19:43:23.229795 systemd[1]: Reached target local-fs.target. Oct 2 19:43:23.230319 systemd[1]: Reached target machines.target. Oct 2 19:43:23.232063 systemd[1]: Starting ldconfig.service... Oct 2 19:43:23.232951 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 19:43:23.233014 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:43:23.234201 systemd[1]: Starting systemd-boot-update.service... Oct 2 19:43:23.235988 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 19:43:23.238193 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 19:43:23.239797 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:43:23.239861 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:43:23.241056 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 2 19:43:23.242187 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1074 (bootctl) Oct 2 19:43:23.244317 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 19:43:23.253051 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 19:43:23.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:23.262529 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 19:43:23.306919 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 19:43:23.308053 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 19:43:23.316986 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 19:43:23.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:43:23.334217 systemd-fsck[1083]: fsck.fat 4.2 (2021-01-31) Oct 2 19:43:23.334217 systemd-fsck[1083]: /dev/vda1: 236 files, 113463/258078 clusters Oct 2 19:43:23.337268 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 19:43:23.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:23.426499 ldconfig[1073]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 19:43:23.429303 systemd[1]: Finished ldconfig.service. Oct 2 19:43:23.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:23.506749 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 19:43:23.508095 systemd[1]: Mounting boot.mount... Oct 2 19:43:23.515404 systemd[1]: Mounted boot.mount. Oct 2 19:43:23.521954 systemd[1]: Finished systemd-boot-update.service. Oct 2 19:43:23.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:23.571605 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 19:43:23.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:23.573632 systemd[1]: Starting audit-rules.service... Oct 2 19:43:23.575278 systemd[1]: Starting clean-ca-certificates.service... Oct 2 19:43:23.577417 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 19:43:23.578000 audit: BPF prog-id=24 op=LOAD Oct 2 19:43:23.579809 systemd[1]: Starting systemd-resolved.service... Oct 2 19:43:23.582000 audit: BPF prog-id=25 op=LOAD Oct 2 19:43:23.584264 systemd[1]: Starting systemd-timesyncd.service... Oct 2 19:43:23.587938 systemd[1]: Starting systemd-update-utmp.service... Oct 2 19:43:23.589281 systemd[1]: Finished clean-ca-certificates.service. Oct 2 19:43:23.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:23.590283 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 19:43:23.592000 audit[1097]: SYSTEM_BOOT pid=1097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:43:23.595239 systemd[1]: Finished systemd-update-utmp.service. Oct 2 19:43:23.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:23.596561 systemd[1]: Finished systemd-journal-catalog-update.service. 
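[editor's aside] ldconfig complains above that /lib/ld.so.conf "is not an ELF file - it has the wrong magic bytes at the start" (harmless here, since that path is a configuration file). The magic-byte test itself is a four-byte comparison; a sketch for illustration only, not ldconfig's logic:

```python
#!/usr/bin/env python3
"""Illustration only: check whether a file starts with the ELF magic bytes,
the same test behind the ldconfig message above."""
import sys

ELF_MAGIC = b"\x7fELF"

def is_elf(path: str) -> bool:
    with open(path, "rb") as fh:
        return fh.read(4) == ELF_MAGIC

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "/lib/ld.so.conf"
    print(f"{path}: {'ELF' if is_elf(path) else 'not an ELF file'}")
```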
Oct 2 19:43:23.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:23.598284 systemd[1]: Starting systemd-update-done.service... Oct 2 19:43:23.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:23.606798 systemd[1]: Finished systemd-update-done.service. Oct 2 19:43:23.621000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 19:43:23.621000 audit[1107]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff0ceb260 a2=420 a3=0 items=0 ppid=1086 pid=1107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:23.621000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 19:43:23.622518 augenrules[1107]: No rules Oct 2 19:43:23.623287 systemd[1]: Finished audit-rules.service. Oct 2 19:43:23.633430 systemd-resolved[1090]: Positive Trust Anchors: Oct 2 19:43:23.633449 systemd-resolved[1090]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:43:23.633478 systemd-resolved[1090]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:43:23.638140 systemd[1]: Started systemd-timesyncd.service. Oct 2 19:43:23.639203 systemd[1]: Reached target time-set.target. Oct 2 19:43:23.639209 systemd-timesyncd[1096]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 2 19:43:23.639258 systemd-timesyncd[1096]: Initial clock synchronization to Mon 2023-10-02 19:43:23.414319 UTC. Oct 2 19:43:23.646096 systemd-resolved[1090]: Defaulting to hostname 'linux'. Oct 2 19:43:23.647581 systemd[1]: Started systemd-resolved.service. Oct 2 19:43:23.648295 systemd[1]: Reached target network.target. Oct 2 19:43:23.648978 systemd[1]: Reached target nss-lookup.target. Oct 2 19:43:23.649615 systemd[1]: Reached target sysinit.target. Oct 2 19:43:23.650214 systemd[1]: Started motdgen.path. Oct 2 19:43:23.650764 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 19:43:23.651723 systemd[1]: Started logrotate.timer. Oct 2 19:43:23.652345 systemd[1]: Started mdadm.timer. Oct 2 19:43:23.653069 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 19:43:23.653676 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 19:43:23.653705 systemd[1]: Reached target paths.target. Oct 2 19:43:23.654198 systemd[1]: Reached target timers.target. Oct 2 19:43:23.655080 systemd[1]: Listening on dbus.socket. Oct 2 19:43:23.656647 systemd[1]: Starting docker.socket... Oct 2 19:43:23.659954 systemd[1]: Listening on sshd.socket. 
Oct 2 19:43:23.660602 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:43:23.661030 systemd[1]: Listening on docker.socket. Oct 2 19:43:23.661683 systemd[1]: Reached target sockets.target. Oct 2 19:43:23.662229 systemd[1]: Reached target basic.target. Oct 2 19:43:23.662860 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:43:23.662890 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:43:23.663928 systemd[1]: Starting containerd.service... Oct 2 19:43:23.665500 systemd[1]: Starting dbus.service... Oct 2 19:43:23.667076 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 19:43:23.668797 systemd[1]: Starting extend-filesystems.service... Oct 2 19:43:23.669399 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 19:43:23.670821 systemd[1]: Starting motdgen.service... Oct 2 19:43:23.673420 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 19:43:23.676976 systemd[1]: Starting prepare-critools.service... Oct 2 19:43:23.678655 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 19:43:23.680561 systemd[1]: Starting sshd-keygen.service... Oct 2 19:43:23.680905 jq[1117]: false Oct 2 19:43:23.684299 systemd[1]: Starting systemd-logind.service... Oct 2 19:43:23.685029 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:43:23.685093 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 2 19:43:23.685576 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 19:43:23.686405 systemd[1]: Starting update-engine.service... Oct 2 19:43:23.688878 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 19:43:23.691910 jq[1136]: true Oct 2 19:43:23.697108 extend-filesystems[1118]: Found vda Oct 2 19:43:23.697108 extend-filesystems[1118]: Found vda1 Oct 2 19:43:23.692676 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 19:43:23.699424 extend-filesystems[1118]: Found vda2 Oct 2 19:43:23.699424 extend-filesystems[1118]: Found vda3 Oct 2 19:43:23.699424 extend-filesystems[1118]: Found usr Oct 2 19:43:23.699424 extend-filesystems[1118]: Found vda4 Oct 2 19:43:23.699424 extend-filesystems[1118]: Found vda6 Oct 2 19:43:23.699424 extend-filesystems[1118]: Found vda7 Oct 2 19:43:23.699424 extend-filesystems[1118]: Found vda9 Oct 2 19:43:23.692839 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 19:43:23.704906 extend-filesystems[1118]: Checking size of /dev/vda9 Oct 2 19:43:23.694625 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 19:43:23.706949 tar[1140]: crictl Oct 2 19:43:23.694783 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 2 19:43:23.718294 tar[1139]: ./ Oct 2 19:43:23.718294 tar[1139]: ./loopback Oct 2 19:43:23.719845 jq[1141]: true Oct 2 19:43:23.708685 dbus-daemon[1116]: [system] SELinux support is enabled Oct 2 19:43:23.708835 systemd[1]: Started dbus.service. 
Oct 2 19:43:23.711142 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 19:43:23.711182 systemd[1]: Reached target system-config.target. Oct 2 19:43:23.711892 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 19:43:23.711906 systemd[1]: Reached target user-config.target. Oct 2 19:43:23.731186 extend-filesystems[1118]: Old size kept for /dev/vda9 Oct 2 19:43:23.726963 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 19:43:23.727146 systemd[1]: Finished extend-filesystems.service. Oct 2 19:43:23.733782 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 19:43:23.733936 systemd[1]: Finished motdgen.service. Oct 2 19:43:23.776493 tar[1139]: ./bandwidth Oct 2 19:43:23.785946 bash[1168]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:43:23.786306 systemd-logind[1132]: Watching system buttons on /dev/input/event0 (Power Button) Oct 2 19:43:23.787067 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 19:43:23.787264 systemd-logind[1132]: New seat seat0. Oct 2 19:43:23.794288 systemd[1]: Started systemd-logind.service. Oct 2 19:43:23.799042 update_engine[1134]: I1002 19:43:23.796996 1134 main.cc:92] Flatcar Update Engine starting Oct 2 19:43:23.806521 systemd[1]: Started update-engine.service. Oct 2 19:43:23.810033 update_engine[1134]: I1002 19:43:23.806627 1134 update_check_scheduler.cc:74] Next update check in 4m46s Oct 2 19:43:23.809030 systemd[1]: Started locksmithd.service. Oct 2 19:43:23.818518 tar[1139]: ./ptp Oct 2 19:43:23.844396 env[1142]: time="2023-10-02T19:43:23.844344920Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 19:43:23.856760 tar[1139]: ./vlan Oct 2 19:43:23.884148 env[1142]: time="2023-10-02T19:43:23.884091480Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 19:43:23.884390 env[1142]: time="2023-10-02T19:43:23.884369920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:43:23.891050 tar[1139]: ./host-device Oct 2 19:43:23.892135 env[1142]: time="2023-10-02T19:43:23.892097200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:43:23.892135 env[1142]: time="2023-10-02T19:43:23.892132080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:43:23.892370 env[1142]: time="2023-10-02T19:43:23.892347720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:43:23.892401 env[1142]: time="2023-10-02T19:43:23.892371200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Oct 2 19:43:23.892401 env[1142]: time="2023-10-02T19:43:23.892387320Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 19:43:23.892401 env[1142]: time="2023-10-02T19:43:23.892397800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 19:43:23.892514 env[1142]: time="2023-10-02T19:43:23.892495320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:43:23.892881 env[1142]: time="2023-10-02T19:43:23.892861760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:43:23.892997 env[1142]: time="2023-10-02T19:43:23.892977600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:43:23.893026 env[1142]: time="2023-10-02T19:43:23.892996280Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 19:43:23.893068 env[1142]: time="2023-10-02T19:43:23.893051040Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 19:43:23.893103 env[1142]: time="2023-10-02T19:43:23.893067280Z" level=info msg="metadata content store policy set" policy=shared Oct 2 19:43:23.900858 env[1142]: time="2023-10-02T19:43:23.900819880Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 19:43:23.900899 env[1142]: time="2023-10-02T19:43:23.900863640Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 19:43:23.900899 env[1142]: time="2023-10-02T19:43:23.900877400Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 19:43:23.900953 env[1142]: time="2023-10-02T19:43:23.900911440Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 19:43:23.900953 env[1142]: time="2023-10-02T19:43:23.900927800Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 19:43:23.900953 env[1142]: time="2023-10-02T19:43:23.900943040Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 19:43:23.901015 env[1142]: time="2023-10-02T19:43:23.900955760Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 19:43:23.901299 env[1142]: time="2023-10-02T19:43:23.901278760Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 19:43:23.901327 env[1142]: time="2023-10-02T19:43:23.901307000Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 19:43:23.901327 env[1142]: time="2023-10-02T19:43:23.901322240Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 19:43:23.901376 env[1142]: time="2023-10-02T19:43:23.901336720Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Oct 2 19:43:23.901376 env[1142]: time="2023-10-02T19:43:23.901350240Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 19:43:23.901520 env[1142]: time="2023-10-02T19:43:23.901500680Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 19:43:23.901612 env[1142]: time="2023-10-02T19:43:23.901595480Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 19:43:23.901843 env[1142]: time="2023-10-02T19:43:23.901824880Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 19:43:23.901881 env[1142]: time="2023-10-02T19:43:23.901854320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 19:43:23.901881 env[1142]: time="2023-10-02T19:43:23.901868680Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 19:43:23.902092 env[1142]: time="2023-10-02T19:43:23.902077360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 19:43:23.902122 env[1142]: time="2023-10-02T19:43:23.902093680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 19:43:23.902122 env[1142]: time="2023-10-02T19:43:23.902107800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 19:43:23.902122 env[1142]: time="2023-10-02T19:43:23.902119520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 19:43:23.902177 env[1142]: time="2023-10-02T19:43:23.902132680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 19:43:23.902177 env[1142]: time="2023-10-02T19:43:23.902145560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 19:43:23.902177 env[1142]: time="2023-10-02T19:43:23.902156800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 19:43:23.902177 env[1142]: time="2023-10-02T19:43:23.902168520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 19:43:23.902257 env[1142]: time="2023-10-02T19:43:23.902181440Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 19:43:23.902323 env[1142]: time="2023-10-02T19:43:23.902304760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 19:43:23.902355 env[1142]: time="2023-10-02T19:43:23.902333400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 19:43:23.902355 env[1142]: time="2023-10-02T19:43:23.902347480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 19:43:23.902396 env[1142]: time="2023-10-02T19:43:23.902360080Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 19:43:23.902396 env[1142]: time="2023-10-02T19:43:23.902375240Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 19:43:23.902396 env[1142]: time="2023-10-02T19:43:23.902392520Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 19:43:23.902481 env[1142]: time="2023-10-02T19:43:23.902417400Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 19:43:23.902507 env[1142]: time="2023-10-02T19:43:23.902481720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 2 19:43:23.902750 env[1142]: time="2023-10-02T19:43:23.902698200Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 19:43:23.909309 env[1142]: time="2023-10-02T19:43:23.902756560Z" level=info msg="Connect containerd service" Oct 2 19:43:23.909309 env[1142]: time="2023-10-02T19:43:23.902790760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 19:43:23.909309 env[1142]: time="2023-10-02T19:43:23.903782240Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 19:43:23.909309 env[1142]: time="2023-10-02T19:43:23.904292160Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Oct 2 19:43:23.909309 env[1142]: time="2023-10-02T19:43:23.904356360Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 2 19:43:23.909309 env[1142]: time="2023-10-02T19:43:23.904400840Z" level=info msg="containerd successfully booted in 0.060716s" Oct 2 19:43:23.909309 env[1142]: time="2023-10-02T19:43:23.904464840Z" level=info msg="Start subscribing containerd event" Oct 2 19:43:23.909309 env[1142]: time="2023-10-02T19:43:23.904525760Z" level=info msg="Start recovering state" Oct 2 19:43:23.909309 env[1142]: time="2023-10-02T19:43:23.904593520Z" level=info msg="Start event monitor" Oct 2 19:43:23.909309 env[1142]: time="2023-10-02T19:43:23.904626320Z" level=info msg="Start snapshots syncer" Oct 2 19:43:23.909309 env[1142]: time="2023-10-02T19:43:23.904638480Z" level=info msg="Start cni network conf syncer for default" Oct 2 19:43:23.909309 env[1142]: time="2023-10-02T19:43:23.904646920Z" level=info msg="Start streaming server" Oct 2 19:43:23.908162 systemd[1]: Started containerd.service. Oct 2 19:43:23.929598 locksmithd[1170]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 19:43:23.929865 tar[1139]: ./tuning Oct 2 19:43:23.959106 tar[1139]: ./vrf Oct 2 19:43:23.989595 tar[1139]: ./sbr Oct 2 19:43:24.018988 tar[1139]: ./tap Oct 2 19:43:24.052840 tar[1139]: ./dhcp Oct 2 19:43:24.118569 systemd[1]: Finished prepare-critools.service. Oct 2 19:43:24.133447 tar[1139]: ./static Oct 2 19:43:24.153899 tar[1139]: ./firewall Oct 2 19:43:24.184762 tar[1139]: ./macvlan Oct 2 19:43:24.212800 tar[1139]: ./dummy Oct 2 19:43:24.240466 tar[1139]: ./bridge Oct 2 19:43:24.270527 tar[1139]: ./ipvlan Oct 2 19:43:24.298094 tar[1139]: ./portmap Oct 2 19:43:24.324360 tar[1139]: ./host-local Oct 2 19:43:24.358860 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 19:43:24.571658 systemd[1]: Created slice system-sshd.slice. Oct 2 19:43:24.722679 systemd-networkd[1050]: eth0: Gained IPv6LL Oct 2 19:43:25.847670 sshd_keygen[1138]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 19:43:25.865277 systemd[1]: Finished sshd-keygen.service. Oct 2 19:43:25.867379 systemd[1]: Starting issuegen.service... Oct 2 19:43:25.868938 systemd[1]: Started sshd@0-10.0.0.13:22-10.0.0.1:39472.service. Oct 2 19:43:25.872820 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 19:43:25.872984 systemd[1]: Finished issuegen.service. Oct 2 19:43:25.874892 systemd[1]: Starting systemd-user-sessions.service... Oct 2 19:43:25.881861 systemd[1]: Finished systemd-user-sessions.service. Oct 2 19:43:25.883898 systemd[1]: Started getty@tty1.service. Oct 2 19:43:25.885750 systemd[1]: Started serial-getty@ttyAMA0.service. Oct 2 19:43:25.886547 systemd[1]: Reached target getty.target. Oct 2 19:43:25.887142 systemd[1]: Reached target multi-user.target. Oct 2 19:43:25.888869 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 19:43:25.895848 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 19:43:25.895994 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 19:43:25.896833 systemd[1]: Startup finished in 651ms (kernel) + 5.817s (initrd) + 5.488s (userspace) = 11.957s. 
Oct 2 19:43:25.926764 sshd[1192]: Accepted publickey for core from 10.0.0.1 port 39472 ssh2: RSA SHA256:2lB5IF9sS6KpIpFExr0lw+xbST4N8bo2+5EMLLOpcG8 Oct 2 19:43:25.929120 sshd[1192]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:43:25.937010 systemd[1]: Created slice user-500.slice. Oct 2 19:43:25.938143 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 19:43:25.939938 systemd-logind[1132]: New session 1 of user core. Oct 2 19:43:25.947985 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 19:43:25.949303 systemd[1]: Starting user@500.service... Oct 2 19:43:25.952225 (systemd)[1202]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:43:26.015755 systemd[1202]: Queued start job for default target default.target. Oct 2 19:43:26.016235 systemd[1202]: Reached target paths.target. Oct 2 19:43:26.016255 systemd[1202]: Reached target sockets.target. Oct 2 19:43:26.016266 systemd[1202]: Reached target timers.target. Oct 2 19:43:26.016276 systemd[1202]: Reached target basic.target. Oct 2 19:43:26.016326 systemd[1202]: Reached target default.target. Oct 2 19:43:26.016348 systemd[1202]: Startup finished in 58ms. Oct 2 19:43:26.016407 systemd[1]: Started user@500.service. Oct 2 19:43:26.017374 systemd[1]: Started session-1.scope. Oct 2 19:43:26.068042 systemd[1]: Started sshd@1-10.0.0.13:22-10.0.0.1:39478.service. Oct 2 19:43:26.120721 sshd[1211]: Accepted publickey for core from 10.0.0.1 port 39478 ssh2: RSA SHA256:2lB5IF9sS6KpIpFExr0lw+xbST4N8bo2+5EMLLOpcG8 Oct 2 19:43:26.121942 sshd[1211]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:43:26.125385 systemd-logind[1132]: New session 2 of user core. Oct 2 19:43:26.126199 systemd[1]: Started session-2.scope. Oct 2 19:43:26.182932 sshd[1211]: pam_unix(sshd:session): session closed for user core Oct 2 19:43:26.187077 systemd[1]: Started sshd@2-10.0.0.13:22-10.0.0.1:39490.service. Oct 2 19:43:26.187561 systemd[1]: sshd@1-10.0.0.13:22-10.0.0.1:39478.service: Deactivated successfully. Oct 2 19:43:26.188357 systemd[1]: session-2.scope: Deactivated successfully. Oct 2 19:43:26.188880 systemd-logind[1132]: Session 2 logged out. Waiting for processes to exit. Oct 2 19:43:26.189886 systemd-logind[1132]: Removed session 2. Oct 2 19:43:26.231364 sshd[1216]: Accepted publickey for core from 10.0.0.1 port 39490 ssh2: RSA SHA256:2lB5IF9sS6KpIpFExr0lw+xbST4N8bo2+5EMLLOpcG8 Oct 2 19:43:26.232746 sshd[1216]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:43:26.235997 systemd-logind[1132]: New session 3 of user core. Oct 2 19:43:26.236824 systemd[1]: Started session-3.scope. Oct 2 19:43:26.285394 sshd[1216]: pam_unix(sshd:session): session closed for user core Oct 2 19:43:26.288198 systemd[1]: sshd@2-10.0.0.13:22-10.0.0.1:39490.service: Deactivated successfully. Oct 2 19:43:26.288806 systemd[1]: session-3.scope: Deactivated successfully. Oct 2 19:43:26.289279 systemd-logind[1132]: Session 3 logged out. Waiting for processes to exit. Oct 2 19:43:26.290353 systemd[1]: Started sshd@3-10.0.0.13:22-10.0.0.1:39498.service. Oct 2 19:43:26.291076 systemd-logind[1132]: Removed session 3. Oct 2 19:43:26.332685 sshd[1223]: Accepted publickey for core from 10.0.0.1 port 39498 ssh2: RSA SHA256:2lB5IF9sS6KpIpFExr0lw+xbST4N8bo2+5EMLLOpcG8 Oct 2 19:43:26.333943 sshd[1223]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:43:26.337873 systemd[1]: Started session-4.scope. 
Oct 2 19:43:26.338403 systemd-logind[1132]: New session 4 of user core. Oct 2 19:43:26.392657 sshd[1223]: pam_unix(sshd:session): session closed for user core Oct 2 19:43:26.395375 systemd[1]: sshd@3-10.0.0.13:22-10.0.0.1:39498.service: Deactivated successfully. Oct 2 19:43:26.395924 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 19:43:26.396383 systemd-logind[1132]: Session 4 logged out. Waiting for processes to exit. Oct 2 19:43:26.397315 systemd[1]: Started sshd@4-10.0.0.13:22-10.0.0.1:39508.service. Oct 2 19:43:26.397825 systemd-logind[1132]: Removed session 4. Oct 2 19:43:26.438713 sshd[1229]: Accepted publickey for core from 10.0.0.1 port 39508 ssh2: RSA SHA256:2lB5IF9sS6KpIpFExr0lw+xbST4N8bo2+5EMLLOpcG8 Oct 2 19:43:26.439923 sshd[1229]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:43:26.443935 systemd[1]: Started session-5.scope. Oct 2 19:43:26.444490 systemd-logind[1132]: New session 5 of user core. Oct 2 19:43:26.504241 sudo[1232]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 19:43:26.504472 sudo[1232]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:43:26.520630 dbus-daemon[1116]: avc: received setenforce notice (enforcing=1) Oct 2 19:43:26.522363 sudo[1232]: pam_unix(sudo:session): session closed for user root Oct 2 19:43:26.524298 sshd[1229]: pam_unix(sshd:session): session closed for user core Oct 2 19:43:26.527226 systemd[1]: sshd@4-10.0.0.13:22-10.0.0.1:39508.service: Deactivated successfully. Oct 2 19:43:26.527876 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 19:43:26.528427 systemd-logind[1132]: Session 5 logged out. Waiting for processes to exit. Oct 2 19:43:26.529605 systemd[1]: Started sshd@5-10.0.0.13:22-10.0.0.1:39518.service. Oct 2 19:43:26.530203 systemd-logind[1132]: Removed session 5. Oct 2 19:43:26.573660 sshd[1236]: Accepted publickey for core from 10.0.0.1 port 39518 ssh2: RSA SHA256:2lB5IF9sS6KpIpFExr0lw+xbST4N8bo2+5EMLLOpcG8 Oct 2 19:43:26.575276 sshd[1236]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:43:26.578451 systemd-logind[1132]: New session 6 of user core. Oct 2 19:43:26.579269 systemd[1]: Started session-6.scope. Oct 2 19:43:26.631011 sudo[1240]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 19:43:26.631211 sudo[1240]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:43:26.634067 sudo[1240]: pam_unix(sudo:session): session closed for user root Oct 2 19:43:26.638651 sudo[1239]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 19:43:26.638835 sudo[1239]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:43:26.647726 systemd[1]: Stopping audit-rules.service... Oct 2 19:43:26.647000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:43:26.649851 kernel: kauditd_printk_skb: 117 callbacks suppressed Oct 2 19:43:26.649907 kernel: audit: type=1305 audit(1696275806.647:156): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:43:26.650175 auditctl[1243]: No rules Oct 2 19:43:26.650387 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 19:43:26.650562 systemd[1]: Stopped audit-rules.service. 
Oct 2 19:43:26.647000 audit[1243]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdf9f81f0 a2=420 a3=0 items=0 ppid=1 pid=1243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:26.652013 systemd[1]: Starting audit-rules.service... Oct 2 19:43:26.653338 kernel: audit: type=1300 audit(1696275806.647:156): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdf9f81f0 a2=420 a3=0 items=0 ppid=1 pid=1243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:26.653386 kernel: audit: type=1327 audit(1696275806.647:156): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:43:26.647000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:43:26.654054 kernel: audit: type=1131 audit(1696275806.649:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:26.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:26.670056 augenrules[1260]: No rules Oct 2 19:43:26.670717 systemd[1]: Finished audit-rules.service. Oct 2 19:43:26.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:26.671000 audit[1239]: USER_END pid=1239 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:43:26.672077 sudo[1239]: pam_unix(sudo:session): session closed for user root Oct 2 19:43:26.674988 kernel: audit: type=1130 audit(1696275806.670:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:26.675061 kernel: audit: type=1106 audit(1696275806.671:159): pid=1239 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:43:26.675081 kernel: audit: type=1104 audit(1696275806.671:160): pid=1239 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:43:26.671000 audit[1239]: CRED_DISP pid=1239 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:43:26.675616 sshd[1236]: pam_unix(sshd:session): session closed for user core Oct 2 19:43:26.677176 systemd[1]: Started sshd@6-10.0.0.13:22-10.0.0.1:39524.service. 
Oct 2 19:43:26.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.13:22-10.0.0.1:39524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:26.679236 kernel: audit: type=1130 audit(1696275806.676:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.13:22-10.0.0.1:39524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:26.679309 kernel: audit: type=1106 audit(1696275806.678:162): pid=1236 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:43:26.678000 audit[1236]: USER_END pid=1236 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:43:26.680135 systemd[1]: sshd@5-10.0.0.13:22-10.0.0.1:39518.service: Deactivated successfully. Oct 2 19:43:26.680772 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 19:43:26.681274 systemd-logind[1132]: Session 6 logged out. Waiting for processes to exit. Oct 2 19:43:26.682130 systemd-logind[1132]: Removed session 6. Oct 2 19:43:26.678000 audit[1236]: CRED_DISP pid=1236 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:43:26.684530 kernel: audit: type=1104 audit(1696275806.678:163): pid=1236 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:43:26.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.13:22-10.0.0.1:39518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:43:26.718000 audit[1265]: USER_ACCT pid=1265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:43:26.719489 sshd[1265]: Accepted publickey for core from 10.0.0.1 port 39524 ssh2: RSA SHA256:2lB5IF9sS6KpIpFExr0lw+xbST4N8bo2+5EMLLOpcG8 Oct 2 19:43:26.719000 audit[1265]: CRED_ACQ pid=1265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:43:26.719000 audit[1265]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcbd67dc0 a2=3 a3=1 items=0 ppid=1 pid=1265 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:26.719000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 19:43:26.720681 sshd[1265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:43:26.723941 systemd-logind[1132]: New session 7 of user core. Oct 2 19:43:26.724816 systemd[1]: Started session-7.scope. Oct 2 19:43:26.727000 audit[1265]: USER_START pid=1265 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:43:26.729000 audit[1268]: CRED_ACQ pid=1268 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:43:26.777000 audit[1269]: USER_ACCT pid=1269 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:43:26.779174 sudo[1269]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 19:43:26.777000 audit[1269]: CRED_REFR pid=1269 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:43:26.779386 sudo[1269]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:43:26.779000 audit[1269]: USER_START pid=1269 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:43:27.301373 systemd[1]: Reloading. 
Oct 2 19:43:27.352040 /usr/lib/systemd/system-generators/torcx-generator[1299]: time="2023-10-02T19:43:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:43:27.352069 /usr/lib/systemd/system-generators/torcx-generator[1299]: time="2023-10-02T19:43:27Z" level=info msg="torcx already run" Oct 2 19:43:27.420733 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:43:27.420753 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:43:27.437529 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:43:27.480000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.480000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.480000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.480000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.480000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.480000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.480000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.480000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.480000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.480000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.480000 audit: BPF prog-id=31 op=LOAD Oct 2 19:43:27.480000 audit: BPF prog-id=24 op=UNLOAD Oct 2 19:43:27.481000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.481000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.481000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.481000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.481000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.481000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.481000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.481000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.481000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.482000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.482000 audit: BPF prog-id=32 op=LOAD Oct 2 19:43:27.482000 audit: BPF prog-id=26 op=UNLOAD Oct 2 19:43:27.482000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.482000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.482000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.482000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.482000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.482000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.482000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.482000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:43:27.482000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.482000 audit: BPF prog-id=33 op=LOAD Oct 2 19:43:27.482000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.482000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.482000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.482000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.482000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.482000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.482000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.482000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.482000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.482000 audit: BPF prog-id=34 op=LOAD Oct 2 19:43:27.482000 audit: BPF prog-id=27 op=UNLOAD Oct 2 19:43:27.482000 audit: BPF prog-id=28 op=UNLOAD Oct 2 19:43:27.482000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.482000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.482000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.482000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.482000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.482000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:43:27.482000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.482000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.482000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.482000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.482000 audit: BPF prog-id=35 op=LOAD Oct 2 19:43:27.483000 audit: BPF prog-id=21 op=UNLOAD Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit: BPF prog-id=36 op=LOAD Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit: BPF prog-id=37 op=LOAD Oct 2 19:43:27.483000 audit: BPF prog-id=22 op=UNLOAD Oct 2 19:43:27.483000 audit: BPF prog-id=23 op=UNLOAD Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit: BPF prog-id=38 op=LOAD Oct 2 19:43:27.483000 audit: BPF prog-id=20 op=UNLOAD Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { bpf } 
for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.483000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.484000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.484000 audit: BPF prog-id=39 op=LOAD Oct 2 19:43:27.484000 audit: BPF prog-id=15 op=UNLOAD Oct 2 19:43:27.484000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.484000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.484000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.484000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.484000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.484000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.484000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:43:27.484000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.484000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.484000 audit: BPF prog-id=40 op=LOAD Oct 2 19:43:27.484000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.484000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.484000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.484000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.484000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.484000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.484000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.484000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.484000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.484000 audit: BPF prog-id=41 op=LOAD Oct 2 19:43:27.484000 audit: BPF prog-id=16 op=UNLOAD Oct 2 19:43:27.484000 audit: BPF prog-id=17 op=UNLOAD Oct 2 19:43:27.484000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.484000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.484000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.484000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.484000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Oct 2 19:43:27.484000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.484000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.484000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.484000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.484000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.484000 audit: BPF prog-id=42 op=LOAD Oct 2 19:43:27.484000 audit: BPF prog-id=29 op=UNLOAD Oct 2 19:43:27.485000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.485000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.485000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.485000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.485000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.485000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.485000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.485000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.485000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.485000 audit: BPF prog-id=43 op=LOAD Oct 2 19:43:27.485000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.485000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.485000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.485000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.485000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.485000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.485000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.485000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.485000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.485000 audit: BPF prog-id=44 op=LOAD Oct 2 19:43:27.485000 audit: BPF prog-id=18 op=UNLOAD Oct 2 19:43:27.485000 audit: BPF prog-id=19 op=UNLOAD Oct 2 19:43:27.486000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.486000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.486000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.486000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.486000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.486000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.486000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.486000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.486000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.486000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:27.486000 audit: BPF prog-id=45 op=LOAD Oct 2 19:43:27.486000 audit: BPF prog-id=25 op=UNLOAD Oct 2 19:43:27.495097 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 19:43:27.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:27.797612 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 19:43:27.798460 systemd[1]: Reached target network-online.target. Oct 2 19:43:27.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:27.804937 systemd[1]: Started kubelet.service. Oct 2 19:43:27.817885 systemd[1]: Starting coreos-metadata.service... Oct 2 19:43:27.826135 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 2 19:43:27.826299 systemd[1]: Finished coreos-metadata.service. Oct 2 19:43:27.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:27.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:27.925175 kubelet[1337]: E1002 19:43:27.925121 1337 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 2 19:43:27.928049 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 2 19:43:27.928179 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 2 19:43:27.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 2 19:43:28.128334 systemd[1]: Stopped kubelet.service. Oct 2 19:43:28.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:28.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:28.144378 systemd[1]: Reloading. 
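The kubelet failure above is the usual symptom of kubelet.service starting before anything has written /var/lib/kubelet/config.yaml (normally a kubeadm or other provisioning step does this later); the unit exits with status 1 and systemd reloads. As an illustration only, not part of the captured log, a minimal KubeletConfiguration of the sort the kubelet expects at that path could look roughly like the sketch below; the clusterDNS address and the other concrete values are assumptions, not values recorded during this boot.

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  # cgroup driver must match the container runtime; the NodeConfig dump later in this log shows "CgroupDriver":"systemd"
  cgroupDriver: systemd
  # static pod manifests; the same path is watched once the kubelet comes up below
  staticPodPath: /etc/kubernetes/manifests
  authentication:
    anonymous:
      enabled: false
    webhook:
      enabled: true
    x509:
      # the client CA bundle the restarted kubelet loads below
      clientCAFile: /etc/kubernetes/pki/ca.crt
  authorization:
    mode: Webhook
  clusterDomain: cluster.local
  clusterDNS:
    - 10.96.0.10   # assumed kubeadm-style default, not taken from this log

After the reload the unit is started again and, as the entries that follow show, this time it stays up and begins bootstrapping against the API server.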
Oct 2 19:43:28.210320 /usr/lib/systemd/system-generators/torcx-generator[1404]: time="2023-10-02T19:43:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:43:28.210348 /usr/lib/systemd/system-generators/torcx-generator[1404]: time="2023-10-02T19:43:28Z" level=info msg="torcx already run" Oct 2 19:43:28.268860 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:43:28.268877 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:43:28.285862 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:43:28.328000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.328000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.329000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.329000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.329000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.329000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.329000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.329000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.329000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.329000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.329000 audit: BPF prog-id=46 op=LOAD Oct 2 19:43:28.329000 audit: BPF prog-id=31 op=UNLOAD Oct 2 19:43:28.330000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.330000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.330000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.330000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.330000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.330000 audit: BPF prog-id=47 op=LOAD Oct 2 19:43:28.330000 audit: BPF prog-id=32 op=UNLOAD Oct 2 19:43:28.330000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.330000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.330000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:43:28.330000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.330000 audit: BPF prog-id=48 op=LOAD Oct 2 19:43:28.330000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.330000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.330000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.330000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.330000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.330000 audit: BPF prog-id=49 op=LOAD Oct 2 19:43:28.330000 audit: BPF prog-id=33 op=UNLOAD Oct 2 19:43:28.330000 audit: BPF prog-id=34 op=UNLOAD Oct 2 19:43:28.331000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.331000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.331000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.331000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.331000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.331000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:43:28.331000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.331000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.331000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.331000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.331000 audit: BPF prog-id=50 op=LOAD Oct 2 19:43:28.331000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:43:28.331000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.331000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.331000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.331000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.331000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.331000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.331000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.331000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.331000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.331000 audit: BPF prog-id=51 op=LOAD Oct 2 19:43:28.331000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.331000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.331000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.331000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.331000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.331000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.331000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.331000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.331000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.331000 audit: BPF prog-id=52 op=LOAD Oct 2 19:43:28.331000 audit: BPF prog-id=36 op=UNLOAD Oct 2 19:43:28.331000 audit: BPF prog-id=37 op=UNLOAD Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit: BPF prog-id=53 op=LOAD Oct 2 19:43:28.332000 audit: BPF prog-id=38 op=UNLOAD Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { bpf } 
for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit: BPF prog-id=54 op=LOAD Oct 2 19:43:28.332000 audit: BPF prog-id=39 op=UNLOAD Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit: BPF prog-id=55 op=LOAD Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit: BPF prog-id=56 op=LOAD Oct 2 19:43:28.332000 audit: BPF prog-id=40 op=UNLOAD Oct 2 19:43:28.332000 audit: BPF prog-id=41 op=UNLOAD Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.332000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.333000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.333000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.333000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Oct 2 19:43:28.333000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.333000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.333000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.333000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.333000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.333000 audit: BPF prog-id=57 op=LOAD Oct 2 19:43:28.333000 audit: BPF prog-id=42 op=UNLOAD Oct 2 19:43:28.334000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.334000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.334000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.334000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.334000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.334000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.334000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.334000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.334000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.334000 audit: BPF prog-id=58 op=LOAD Oct 2 19:43:28.334000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.334000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.334000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.334000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.334000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.334000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.334000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.334000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.334000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.334000 audit: BPF prog-id=59 op=LOAD Oct 2 19:43:28.334000 audit: BPF prog-id=43 op=UNLOAD Oct 2 19:43:28.334000 audit: BPF prog-id=44 op=UNLOAD Oct 2 19:43:28.335000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.335000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.335000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.335000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.335000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.335000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.335000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.335000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.335000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.335000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:28.335000 audit: BPF prog-id=60 op=LOAD Oct 2 19:43:28.335000 audit: BPF prog-id=45 op=UNLOAD Oct 2 19:43:28.348447 systemd[1]: Started kubelet.service. Oct 2 19:43:28.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:28.415798 kubelet[1443]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:43:28.415798 kubelet[1443]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 2 19:43:28.415798 kubelet[1443]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:43:28.419582 kubelet[1443]: I1002 19:43:28.419522 1443 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:43:29.291891 kubelet[1443]: I1002 19:43:29.291849 1443 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Oct 2 19:43:29.291891 kubelet[1443]: I1002 19:43:29.291883 1443 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:43:29.292118 kubelet[1443]: I1002 19:43:29.292084 1443 server.go:895] "Client rotation is on, will bootstrap in background" Oct 2 19:43:29.296074 kubelet[1443]: I1002 19:43:29.296043 1443 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:43:29.302222 kubelet[1443]: W1002 19:43:29.302197 1443 machine.go:65] Cannot read vendor id correctly, set empty. Oct 2 19:43:29.303013 kubelet[1443]: I1002 19:43:29.302988 1443 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 2 19:43:29.303337 kubelet[1443]: I1002 19:43:29.303320 1443 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:43:29.303573 kubelet[1443]: I1002 19:43:29.303550 1443 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 2 19:43:29.303722 kubelet[1443]: I1002 19:43:29.303709 1443 topology_manager.go:138] "Creating topology manager with none policy" Oct 2 19:43:29.303779 kubelet[1443]: I1002 19:43:29.303770 1443 container_manager_linux.go:301] "Creating device plugin manager" Oct 2 19:43:29.303929 kubelet[1443]: I1002 19:43:29.303915 1443 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:43:29.304238 kubelet[1443]: I1002 19:43:29.304222 1443 kubelet.go:393] "Attempting to sync node with API server" Oct 2 19:43:29.304323 kubelet[1443]: I1002 19:43:29.304313 1443 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:43:29.304392 kubelet[1443]: I1002 19:43:29.304382 1443 kubelet.go:309] "Adding apiserver pod source" Oct 2 19:43:29.304475 kubelet[1443]: E1002 19:43:29.304446 1443 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:29.304524 kubelet[1443]: E1002 19:43:29.304498 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:29.304571 kubelet[1443]: I1002 19:43:29.304559 1443 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:43:29.305664 kubelet[1443]: I1002 19:43:29.305646 1443 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:43:29.306106 kubelet[1443]: W1002 19:43:29.306091 1443 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
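The flag-deprecation warnings and the container manager NodeConfig dump above both point at settings that, per the kubelet's own messages, belong in the config file passed via --config rather than on the command line. As a rough, hedged illustration (not taken from this log), the same values would be expressed in KubeletConfiguration terms approximately as below; the containerd socket path is an assumption (the log only names containerd 1.6.16), while the volume plugin directory and the eviction thresholds are copied from the entries above.

  # config-file equivalents of the deprecated --container-runtime-endpoint and --volume-plugin-dir flags
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock   # assumed socket path
  volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
  # the HardEvictionThresholds printed in the container manager dump above
  evictionHard:
    memory.available: "100Mi"
    nodefs.available: "10%"
    nodefs.inodesFree: "5%"
    imagefs.available: "15%"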
Oct 2 19:43:29.307013 kubelet[1443]: I1002 19:43:29.306981 1443 server.go:1232] "Started kubelet" Oct 2 19:43:29.307141 kubelet[1443]: I1002 19:43:29.307120 1443 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:43:29.307335 kubelet[1443]: I1002 19:43:29.307315 1443 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Oct 2 19:43:29.307649 kubelet[1443]: I1002 19:43:29.307626 1443 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 2 19:43:29.307877 kubelet[1443]: I1002 19:43:29.307845 1443 server.go:462] "Adding debug handlers to kubelet server" Oct 2 19:43:29.306000 audit[1443]: AVC avc: denied { mac_admin } for pid=1443 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:29.306000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:43:29.306000 audit[1443]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000881a10 a1=4000079f50 a2=40008819e0 a3=25 items=0 ppid=1 pid=1443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:29.306000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:43:29.308336 kubelet[1443]: E1002 19:43:29.308312 1443 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:43:29.308336 kubelet[1443]: E1002 19:43:29.308333 1443 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:43:29.308409 kubelet[1443]: I1002 19:43:29.308356 1443 kubelet.go:1386] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:43:29.306000 audit[1443]: AVC avc: denied { mac_admin } for pid=1443 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:29.306000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:43:29.306000 audit[1443]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000bcc100 a1=4000079f68 a2=4000881aa0 a3=25 items=0 ppid=1 pid=1443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:29.306000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:43:29.308604 kubelet[1443]: I1002 19:43:29.308444 1443 kubelet.go:1390] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:43:29.308604 kubelet[1443]: I1002 19:43:29.308519 1443 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:43:29.309619 kubelet[1443]: E1002 19:43:29.309592 1443 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 19:43:29.309682 kubelet[1443]: I1002 19:43:29.309639 1443 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 2 19:43:29.309754 kubelet[1443]: I1002 19:43:29.309739 1443 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 2 19:43:29.309841 kubelet[1443]: I1002 19:43:29.309810 1443 reconciler_new.go:29] "Reconciler: start to sync state" Oct 2 19:43:29.327631 kubelet[1443]: E1002 19:43:29.327522 1443 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a61dc7ed3df0d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 29, 306926861, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 29, 306926861, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), 
ReportingController:"kubelet", ReportingInstance:"10.0.0.13"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:43:29.327842 kubelet[1443]: W1002 19:43:29.327822 1443 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:43:29.327900 kubelet[1443]: E1002 19:43:29.327849 1443 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:43:29.327926 kubelet[1443]: E1002 19:43:29.327907 1443 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.13\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Oct 2 19:43:29.329202 kubelet[1443]: W1002 19:43:29.329169 1443 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.13" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:43:29.329736 kubelet[1443]: E1002 19:43:29.329714 1443 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.13" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:43:29.329808 kubelet[1443]: W1002 19:43:29.329792 1443 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:43:29.329842 kubelet[1443]: E1002 19:43:29.329810 1443 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:43:29.330118 kubelet[1443]: E1002 19:43:29.330030 1443 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a61dc7ee9353c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 29, 308325180, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 29, 308325180, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), 
Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.13"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:43:29.332294 kubelet[1443]: I1002 19:43:29.332275 1443 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 2 19:43:29.332428 kubelet[1443]: I1002 19:43:29.332405 1443 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 2 19:43:29.332608 kubelet[1443]: E1002 19:43:29.332423 1443 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a61dc804a949b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.13 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 29, 331483803, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 29, 331483803, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.13"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:43:29.332790 kubelet[1443]: I1002 19:43:29.332757 1443 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:43:29.335271 kubelet[1443]: E1002 19:43:29.335187 1443 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a61dc804aa6ef", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.13 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 29, 331488495, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 29, 331488495, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.13"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:43:29.336086 kubelet[1443]: I1002 19:43:29.336059 1443 policy_none.go:49] "None policy: Start" Oct 2 19:43:29.336710 kubelet[1443]: E1002 19:43:29.336640 1443 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a61dc804abf92", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.13 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 29, 331494802, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 29, 331494802, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.13"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:43:29.336992 kubelet[1443]: I1002 19:43:29.336972 1443 memory_manager.go:169] "Starting memorymanager" policy="None" Oct 2 19:43:29.337035 kubelet[1443]: I1002 19:43:29.336996 1443 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:43:29.336000 audit[1457]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1457 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:29.336000 audit[1457]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffdbbe7400 a2=0 a3=1 items=0 ppid=1443 pid=1457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:29.336000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:43:29.339000 audit[1463]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1463 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:29.339000 audit[1463]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffc29f21c0 a2=0 a3=1 items=0 ppid=1443 pid=1463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:29.339000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:43:29.342640 systemd[1]: Created slice kubepods.slice. Oct 2 19:43:29.346597 systemd[1]: Created slice kubepods-burstable.slice. Oct 2 19:43:29.348742 systemd[1]: Created slice kubepods-besteffort.slice. 
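Annotation: the NETFILTER_CFG audit records above carry the invoking command line in the PROCTITLE field, hex-encoded with NUL-separated arguments. A minimal stdlib sketch that decodes the first one and recovers the iptables call the kubelet issued while creating its KUBE-* chains:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// PROCTITLE of the first netfilter audit record above (pid 1457).
	p := "69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65"
	raw, err := hex.DecodeString(p)
	if err != nil {
		panic(err)
	}
	// Arguments are separated by NUL bytes.
	fmt.Println(strings.Join(strings.Split(string(raw), "\x00"), " "))
	// Output: iptables -w 5 -W 100000 -N KUBE-IPTABLES-HINT -t mangle
}
```

The remaining PROCTITLE values decode the same way, e.g. the second record above becomes "iptables -w 5 -W 100000 -N KUBE-FIREWALL -t filter".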
Oct 2 19:43:29.357171 kubelet[1443]: I1002 19:43:29.357128 1443 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:43:29.355000 audit[1443]: AVC avc: denied { mac_admin } for pid=1443 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:29.355000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:43:29.355000 audit[1443]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000077b90 a1=4000d2d038 a2=4000077b30 a3=25 items=0 ppid=1 pid=1443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:29.355000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:43:29.357418 kubelet[1443]: I1002 19:43:29.357205 1443 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:43:29.357418 kubelet[1443]: I1002 19:43:29.357401 1443 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:43:29.358022 kubelet[1443]: E1002 19:43:29.357988 1443 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.13\" not found" Oct 2 19:43:29.360343 kubelet[1443]: E1002 19:43:29.360224 1443 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a61dc81e3fc5a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 29, 358314586, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 29, 358314586, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.13"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
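Annotation: the mac_admin denials and SELINUX_ERR records interleaved above are the kubelet trying to label its plugin and device-plugin directories with system_u:object_r:container_file_t:s0; each setxattr returns EINVAL (exit=-22), which is what the "could not set selinux context" messages report. An illustrative sketch of the failing call, using the golang.org/x/sys/unix wrapper (not the kubelet's own code, and whether it fails the same way depends on the host's SELinux state):

```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// Context and path taken from the log above; on this host the kernel
	// rejects the label, so the call is expected to fail with EINVAL.
	ctx := []byte("system_u:object_r:container_file_t:s0")
	err := unix.Setxattr("/var/lib/kubelet/device-plugins/", "security.selinux", ctx, 0)
	fmt.Println(err) // e.g. "invalid argument"
}
```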
Oct 2 19:43:29.341000 audit[1465]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1465 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:29.341000 audit[1465]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe51c33d0 a2=0 a3=1 items=0 ppid=1443 pid=1465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:29.341000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:43:29.363000 audit[1470]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1470 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:29.363000 audit[1470]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd8d766e0 a2=0 a3=1 items=0 ppid=1443 pid=1470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:29.363000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:43:29.404000 audit[1475]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1475 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:29.404000 audit[1475]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffcb76b1c0 a2=0 a3=1 items=0 ppid=1443 pid=1475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:29.404000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:43:29.404888 kubelet[1443]: I1002 19:43:29.404862 1443 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Oct 2 19:43:29.405000 audit[1477]: NETFILTER_CFG table=mangle:7 family=2 entries=1 op=nft_register_chain pid=1477 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:29.405000 audit[1477]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe2599cc0 a2=0 a3=1 items=0 ppid=1443 pid=1477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:29.405000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:43:29.405000 audit[1476]: NETFILTER_CFG table=mangle:8 family=10 entries=2 op=nft_register_chain pid=1476 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:29.405000 audit[1476]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe23c7a80 a2=0 a3=1 items=0 ppid=1443 pid=1476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:29.405000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:43:29.406727 kubelet[1443]: I1002 19:43:29.406703 1443 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 2 19:43:29.406784 kubelet[1443]: I1002 19:43:29.406749 1443 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 2 19:43:29.406784 kubelet[1443]: I1002 19:43:29.406770 1443 kubelet.go:2303] "Starting kubelet main sync loop" Oct 2 19:43:29.407028 kubelet[1443]: E1002 19:43:29.406822 1443 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 19:43:29.406000 audit[1478]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1478 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:29.406000 audit[1478]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=fffff0a15cb0 a2=0 a3=1 items=0 ppid=1443 pid=1478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:29.406000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:43:29.407000 audit[1479]: NETFILTER_CFG table=mangle:10 family=10 entries=1 op=nft_register_chain pid=1479 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:29.407000 audit[1479]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe84ebe90 a2=0 a3=1 items=0 ppid=1443 pid=1479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:29.407000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:43:29.408390 kubelet[1443]: W1002 19:43:29.408366 1443 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the 
cluster scope Oct 2 19:43:29.408447 kubelet[1443]: E1002 19:43:29.408395 1443 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:43:29.408000 audit[1480]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=1480 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:29.408000 audit[1480]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff6fa35b0 a2=0 a3=1 items=0 ppid=1443 pid=1480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:29.408000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:43:29.408000 audit[1481]: NETFILTER_CFG table=nat:12 family=10 entries=2 op=nft_register_chain pid=1481 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:29.408000 audit[1481]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffc9355c80 a2=0 a3=1 items=0 ppid=1443 pid=1481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:29.408000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:43:29.409000 audit[1482]: NETFILTER_CFG table=filter:13 family=10 entries=2 op=nft_register_chain pid=1482 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:29.409000 audit[1482]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc0ccae30 a2=0 a3=1 items=0 ppid=1443 pid=1482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:29.409000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:43:29.410845 kubelet[1443]: I1002 19:43:29.410811 1443 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.13" Oct 2 19:43:29.411807 kubelet[1443]: E1002 19:43:29.411777 1443 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.13" Oct 2 19:43:29.412052 kubelet[1443]: E1002 19:43:29.411975 1443 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a61dc804a949b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", 
FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.13 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 29, 331483803, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 29, 410779721, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.13"}': 'events "10.0.0.13.178a61dc804a949b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:43:29.412841 kubelet[1443]: E1002 19:43:29.412777 1443 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a61dc804aa6ef", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.13 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 29, 331488495, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 29, 410785438, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.13"}': 'events "10.0.0.13.178a61dc804aa6ef" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:43:29.413593 kubelet[1443]: E1002 19:43:29.413534 1443 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a61dc804abf92", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.13 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 29, 331494802, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 29, 410788158, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.13"}': 'events "10.0.0.13.178a61dc804abf92" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:43:29.529656 kubelet[1443]: E1002 19:43:29.529619 1443 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.13\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Oct 2 19:43:29.612639 kubelet[1443]: I1002 19:43:29.612542 1443 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.13" Oct 2 19:43:29.614321 kubelet[1443]: E1002 19:43:29.614268 1443 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.13" Oct 2 19:43:29.614409 kubelet[1443]: E1002 19:43:29.614289 1443 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a61dc804a949b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.13 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 29, 331483803, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 29, 612500452, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", 
ReportingInstance:"10.0.0.13"}': 'events "10.0.0.13.178a61dc804a949b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:43:29.615610 kubelet[1443]: E1002 19:43:29.615513 1443 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a61dc804aa6ef", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.13 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 29, 331488495, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 29, 612509519, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.13"}': 'events "10.0.0.13.178a61dc804aa6ef" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:43:29.618320 kubelet[1443]: E1002 19:43:29.618236 1443 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a61dc804abf92", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.13 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 29, 331494802, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 29, 612512634, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.13"}': 'events "10.0.0.13.178a61dc804abf92" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:43:29.931994 kubelet[1443]: E1002 19:43:29.931857 1443 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.13\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Oct 2 19:43:30.016526 kubelet[1443]: I1002 19:43:30.015745 1443 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.13" Oct 2 19:43:30.017022 kubelet[1443]: E1002 19:43:30.016963 1443 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.13" Oct 2 19:43:30.017090 kubelet[1443]: E1002 19:43:30.017014 1443 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a61dc804a949b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.13 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 29, 331483803, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 30, 15706377, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.13"}': 'events "10.0.0.13.178a61dc804a949b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
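Annotation: the lease controller's "Failed to ensure lease exists, will retry" interval doubles after each consecutive failure — 200ms, then 400ms, then 800ms in the three messages so far. A minimal sketch of that doubling; whatever upper cap applies is not visible in this excerpt:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond
	for attempt := 1; attempt <= 3; attempt++ {
		fmt.Printf("attempt %d failed, will retry in %s\n", attempt, interval)
		interval *= 2 // 200ms -> 400ms -> 800ms, matching the log
	}
}
```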
Oct 2 19:43:30.018637 kubelet[1443]: E1002 19:43:30.018511 1443 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a61dc804aa6ef", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.13 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 29, 331488495, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 30, 15712697, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.13"}': 'events "10.0.0.13.178a61dc804aa6ef" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:43:30.019490 kubelet[1443]: E1002 19:43:30.019410 1443 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a61dc804abf92", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.13 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 29, 331494802, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 30, 15715975, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.0.0.13"}': 'events "10.0.0.13.178a61dc804abf92" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
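Annotation: the three node-condition events keep their original names and FirstTimestamp while Count climbs from 1 to 4 and LastTimestamp advances, and the rejected verb switches from "create" to "patch" after the first attempt — the event recorder deduplicates repeats and only patches the existing object afterwards. A toy model of that bookkeeping (types and names here are illustrative, not kubelet code):

```go
package main

import (
	"fmt"
	"time"
)

type key struct{ node, reason string }

type record struct {
	count       int
	first, last time.Time
}

// observe returns "create" the first time a key is seen and "patch" afterwards,
// bumping count and last-timestamp — the pattern visible above.
func observe(seen map[key]*record, k key, now time.Time) string {
	if r, ok := seen[k]; ok {
		r.count++
		r.last = now
		return "patch"
	}
	seen[k] = &record{count: 1, first: now, last: now}
	return "create"
}

func main() {
	seen := map[key]*record{}
	k := key{"10.0.0.13", "NodeHasSufficientMemory"}
	for i := 0; i < 4; i++ {
		fmt.Println(observe(seen, k, time.Now()))
	}
	// Output: create patch patch patch
}
```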
Oct 2 19:43:30.284908 kubelet[1443]: W1002 19:43:30.284807 1443 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:43:30.284908 kubelet[1443]: E1002 19:43:30.284838 1443 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:43:30.292907 kubelet[1443]: W1002 19:43:30.292884 1443 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.13" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:43:30.292907 kubelet[1443]: E1002 19:43:30.292904 1443 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.13" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:43:30.294974 kubelet[1443]: I1002 19:43:30.294953 1443 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 19:43:30.306753 kubelet[1443]: E1002 19:43:30.305155 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:30.685688 kubelet[1443]: E1002 19:43:30.685588 1443 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.13" not found Oct 2 19:43:30.740917 kubelet[1443]: E1002 19:43:30.740885 1443 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.13\" not found" node="10.0.0.13" Oct 2 19:43:30.818142 kubelet[1443]: I1002 19:43:30.818111 1443 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.13" Oct 2 19:43:30.822157 kubelet[1443]: I1002 19:43:30.822124 1443 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.13" Oct 2 19:43:30.830645 kubelet[1443]: E1002 19:43:30.830615 1443 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 19:43:30.867820 sudo[1269]: pam_unix(sudo:session): session closed for user root Oct 2 19:43:30.867000 audit[1269]: USER_END pid=1269 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:43:30.867000 audit[1269]: CRED_DISP pid=1269 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:43:30.869253 sshd[1265]: pam_unix(sshd:session): session closed for user core Oct 2 19:43:30.869000 audit[1265]: USER_END pid=1265 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:43:30.869000 audit[1265]: CRED_DISP pid=1265 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:43:30.871855 systemd[1]: sshd@6-10.0.0.13:22-10.0.0.1:39524.service: Deactivated successfully. Oct 2 19:43:30.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.13:22-10.0.0.1:39524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:30.872593 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 19:43:30.873154 systemd-logind[1132]: Session 7 logged out. Waiting for processes to exit. Oct 2 19:43:30.873966 systemd-logind[1132]: Removed session 7. Oct 2 19:43:30.931314 kubelet[1443]: E1002 19:43:30.931248 1443 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 19:43:31.031945 kubelet[1443]: E1002 19:43:31.031708 1443 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 19:43:31.132656 kubelet[1443]: E1002 19:43:31.132592 1443 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 19:43:31.233267 kubelet[1443]: E1002 19:43:31.233146 1443 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 19:43:31.305574 kubelet[1443]: E1002 19:43:31.305472 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:31.333996 kubelet[1443]: E1002 19:43:31.333966 1443 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 19:43:31.434637 kubelet[1443]: E1002 19:43:31.434593 1443 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 19:43:31.535084 kubelet[1443]: E1002 19:43:31.535045 1443 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 19:43:31.635681 kubelet[1443]: E1002 19:43:31.635579 1443 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 19:43:31.736148 kubelet[1443]: E1002 19:43:31.736102 1443 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 19:43:31.836900 kubelet[1443]: E1002 19:43:31.836842 1443 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 19:43:31.937418 kubelet[1443]: E1002 19:43:31.937303 1443 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 19:43:32.037791 kubelet[1443]: E1002 19:43:32.037740 1443 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 19:43:32.138229 
kubelet[1443]: E1002 19:43:32.138180 1443 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 19:43:32.238733 kubelet[1443]: E1002 19:43:32.238637 1443 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 19:43:32.306079 kubelet[1443]: E1002 19:43:32.306040 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:32.339601 kubelet[1443]: E1002 19:43:32.339562 1443 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 19:43:32.440258 kubelet[1443]: E1002 19:43:32.440214 1443 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 19:43:32.540917 kubelet[1443]: E1002 19:43:32.540800 1443 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 19:43:32.641469 kubelet[1443]: E1002 19:43:32.641404 1443 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 19:43:32.742575 kubelet[1443]: E1002 19:43:32.742511 1443 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 19:43:32.843218 kubelet[1443]: E1002 19:43:32.843095 1443 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 19:43:32.943473 kubelet[1443]: E1002 19:43:32.943420 1443 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 19:43:33.043931 kubelet[1443]: E1002 19:43:33.043895 1443 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 19:43:33.144881 kubelet[1443]: E1002 19:43:33.144775 1443 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 19:43:33.245394 kubelet[1443]: E1002 19:43:33.245342 1443 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 19:43:33.306817 kubelet[1443]: E1002 19:43:33.306781 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:33.346339 kubelet[1443]: E1002 19:43:33.346301 1443 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 19:43:33.447501 kubelet[1443]: I1002 19:43:33.447396 1443 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 19:43:33.448061 env[1142]: time="2023-10-02T19:43:33.447963047Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
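Annotation: at 19:43:33 the node finally receives its pod range — the runtime config is updated with podcidr 192.168.1.0/24, and the next entry below shows the kubelet switching its own Pod CIDR from empty to the same value. A trivial check of what that range covers:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	_, ipnet, err := net.ParseCIDR("192.168.1.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	fmt.Printf("%s spans 2^%d = %d addresses\n", ipnet, bits-ones, 1<<(bits-ones))
}
```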
Oct 2 19:43:33.448322 kubelet[1443]: I1002 19:43:33.448227 1443 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 19:43:34.307682 kubelet[1443]: E1002 19:43:34.307624 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:34.308074 kubelet[1443]: I1002 19:43:34.307731 1443 apiserver.go:52] "Watching apiserver" Oct 2 19:43:34.311380 kubelet[1443]: I1002 19:43:34.311341 1443 topology_manager.go:215] "Topology Admit Handler" podUID="b1ed0e1a-951e-4981-a148-7f66eae3559e" podNamespace="kube-system" podName="cilium-wlkps" Oct 2 19:43:34.311499 kubelet[1443]: I1002 19:43:34.311488 1443 topology_manager.go:215] "Topology Admit Handler" podUID="c0ead1f8-8609-4d8c-8367-c30fb94eefb3" podNamespace="kube-system" podName="kube-proxy-dqgms" Oct 2 19:43:34.317579 systemd[1]: Created slice kubepods-burstable-podb1ed0e1a_951e_4981_a148_7f66eae3559e.slice. Oct 2 19:43:34.337620 systemd[1]: Created slice kubepods-besteffort-podc0ead1f8_8609_4d8c_8367_c30fb94eefb3.slice. Oct 2 19:43:34.410397 kubelet[1443]: I1002 19:43:34.410367 1443 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 2 19:43:34.439714 kubelet[1443]: I1002 19:43:34.439685 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-bpf-maps\") pod \"cilium-wlkps\" (UID: \"b1ed0e1a-951e-4981-a148-7f66eae3559e\") " pod="kube-system/cilium-wlkps" Oct 2 19:43:34.439895 kubelet[1443]: I1002 19:43:34.439884 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-host-proc-sys-net\") pod \"cilium-wlkps\" (UID: \"b1ed0e1a-951e-4981-a148-7f66eae3559e\") " pod="kube-system/cilium-wlkps" Oct 2 19:43:34.439981 kubelet[1443]: I1002 19:43:34.439971 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b1ed0e1a-951e-4981-a148-7f66eae3559e-hubble-tls\") pod \"cilium-wlkps\" (UID: \"b1ed0e1a-951e-4981-a148-7f66eae3559e\") " pod="kube-system/cilium-wlkps" Oct 2 19:43:34.440108 kubelet[1443]: I1002 19:43:34.440069 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c0ead1f8-8609-4d8c-8367-c30fb94eefb3-kube-proxy\") pod \"kube-proxy-dqgms\" (UID: \"c0ead1f8-8609-4d8c-8367-c30fb94eefb3\") " pod="kube-system/kube-proxy-dqgms" Oct 2 19:43:34.440155 kubelet[1443]: I1002 19:43:34.440119 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-cilium-run\") pod \"cilium-wlkps\" (UID: \"b1ed0e1a-951e-4981-a148-7f66eae3559e\") " pod="kube-system/cilium-wlkps" Oct 2 19:43:34.440155 kubelet[1443]: I1002 19:43:34.440141 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-hostproc\") pod \"cilium-wlkps\" (UID: \"b1ed0e1a-951e-4981-a148-7f66eae3559e\") " pod="kube-system/cilium-wlkps" Oct 2 19:43:34.440212 kubelet[1443]: I1002 19:43:34.440160 1443 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-cni-path\") pod \"cilium-wlkps\" (UID: \"b1ed0e1a-951e-4981-a148-7f66eae3559e\") " pod="kube-system/cilium-wlkps" Oct 2 19:43:34.440212 kubelet[1443]: I1002 19:43:34.440189 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-xtables-lock\") pod \"cilium-wlkps\" (UID: \"b1ed0e1a-951e-4981-a148-7f66eae3559e\") " pod="kube-system/cilium-wlkps" Oct 2 19:43:34.440259 kubelet[1443]: I1002 19:43:34.440216 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7ms9\" (UniqueName: \"kubernetes.io/projected/b1ed0e1a-951e-4981-a148-7f66eae3559e-kube-api-access-d7ms9\") pod \"cilium-wlkps\" (UID: \"b1ed0e1a-951e-4981-a148-7f66eae3559e\") " pod="kube-system/cilium-wlkps" Oct 2 19:43:34.440259 kubelet[1443]: I1002 19:43:34.440235 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-etc-cni-netd\") pod \"cilium-wlkps\" (UID: \"b1ed0e1a-951e-4981-a148-7f66eae3559e\") " pod="kube-system/cilium-wlkps" Oct 2 19:43:34.440306 kubelet[1443]: I1002 19:43:34.440266 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b1ed0e1a-951e-4981-a148-7f66eae3559e-cilium-config-path\") pod \"cilium-wlkps\" (UID: \"b1ed0e1a-951e-4981-a148-7f66eae3559e\") " pod="kube-system/cilium-wlkps" Oct 2 19:43:34.440306 kubelet[1443]: I1002 19:43:34.440305 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-host-proc-sys-kernel\") pod \"cilium-wlkps\" (UID: \"b1ed0e1a-951e-4981-a148-7f66eae3559e\") " pod="kube-system/cilium-wlkps" Oct 2 19:43:34.440351 kubelet[1443]: I1002 19:43:34.440338 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0ead1f8-8609-4d8c-8367-c30fb94eefb3-xtables-lock\") pod \"kube-proxy-dqgms\" (UID: \"c0ead1f8-8609-4d8c-8367-c30fb94eefb3\") " pod="kube-system/kube-proxy-dqgms" Oct 2 19:43:34.440375 kubelet[1443]: I1002 19:43:34.440359 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlk7x\" (UniqueName: \"kubernetes.io/projected/c0ead1f8-8609-4d8c-8367-c30fb94eefb3-kube-api-access-jlk7x\") pod \"kube-proxy-dqgms\" (UID: \"c0ead1f8-8609-4d8c-8367-c30fb94eefb3\") " pod="kube-system/kube-proxy-dqgms" Oct 2 19:43:34.440409 kubelet[1443]: I1002 19:43:34.440376 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-cilium-cgroup\") pod \"cilium-wlkps\" (UID: \"b1ed0e1a-951e-4981-a148-7f66eae3559e\") " pod="kube-system/cilium-wlkps" Oct 2 19:43:34.440502 kubelet[1443]: I1002 19:43:34.440415 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-lib-modules\") pod \"cilium-wlkps\" (UID: \"b1ed0e1a-951e-4981-a148-7f66eae3559e\") " pod="kube-system/cilium-wlkps" Oct 2 19:43:34.440502 kubelet[1443]: I1002 19:43:34.440433 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b1ed0e1a-951e-4981-a148-7f66eae3559e-clustermesh-secrets\") pod \"cilium-wlkps\" (UID: \"b1ed0e1a-951e-4981-a148-7f66eae3559e\") " pod="kube-system/cilium-wlkps" Oct 2 19:43:34.440502 kubelet[1443]: I1002 19:43:34.440466 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0ead1f8-8609-4d8c-8367-c30fb94eefb3-lib-modules\") pod \"kube-proxy-dqgms\" (UID: \"c0ead1f8-8609-4d8c-8367-c30fb94eefb3\") " pod="kube-system/kube-proxy-dqgms" Oct 2 19:43:34.636429 kubelet[1443]: E1002 19:43:34.636302 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:43:34.638055 env[1142]: time="2023-10-02T19:43:34.637419306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wlkps,Uid:b1ed0e1a-951e-4981-a148-7f66eae3559e,Namespace:kube-system,Attempt:0,}" Oct 2 19:43:34.657333 kubelet[1443]: E1002 19:43:34.657296 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:43:34.658328 env[1142]: time="2023-10-02T19:43:34.657984959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dqgms,Uid:c0ead1f8-8609-4d8c-8367-c30fb94eefb3,Namespace:kube-system,Attempt:0,}" Oct 2 19:43:35.209625 env[1142]: time="2023-10-02T19:43:35.209574473Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:35.210471 env[1142]: time="2023-10-02T19:43:35.210428555Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:35.211816 env[1142]: time="2023-10-02T19:43:35.211792677Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:35.213349 env[1142]: time="2023-10-02T19:43:35.213313896Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:35.214554 env[1142]: time="2023-10-02T19:43:35.214502600Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:35.216966 env[1142]: time="2023-10-02T19:43:35.216933301Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:35.223732 env[1142]: time="2023-10-02T19:43:35.223693942Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:35.225938 env[1142]: time="2023-10-02T19:43:35.225902727Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:35.254006 env[1142]: time="2023-10-02T19:43:35.253911283Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:43:35.254006 env[1142]: time="2023-10-02T19:43:35.253960364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:43:35.254006 env[1142]: time="2023-10-02T19:43:35.253971690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:43:35.254190 env[1142]: time="2023-10-02T19:43:35.254043980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:43:35.254190 env[1142]: time="2023-10-02T19:43:35.254098783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:43:35.254190 env[1142]: time="2023-10-02T19:43:35.254127158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:43:35.254335 env[1142]: time="2023-10-02T19:43:35.254270148Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0fdf4cd53816703871dc511dee0a04b212d320a92ad90bb0fba6ae2eee0628f4 pid=1503 runtime=io.containerd.runc.v2 Oct 2 19:43:35.254335 env[1142]: time="2023-10-02T19:43:35.254264703Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/405709cc56f90d3db69d4fc1f3ba0554d586c52891f71d5ff16517df08fa0e54 pid=1505 runtime=io.containerd.runc.v2 Oct 2 19:43:35.272242 systemd[1]: Started cri-containerd-0fdf4cd53816703871dc511dee0a04b212d320a92ad90bb0fba6ae2eee0628f4.scope. Oct 2 19:43:35.281473 systemd[1]: Started cri-containerd-405709cc56f90d3db69d4fc1f3ba0554d586c52891f71d5ff16517df08fa0e54.scope. 
Oct 2 19:43:35.302000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.306119 kernel: kauditd_printk_skb: 416 callbacks suppressed Oct 2 19:43:35.306189 kernel: audit: type=1400 audit(1696275815.302:545): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.306266 kernel: audit: type=1400 audit(1696275815.302:546): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.302000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.308221 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:43:35.308280 kernel: audit: type=1400 audit(1696275815.302:547): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.308307 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:43:35.308322 kernel: audit: audit_lost=1 audit_rate_limit=0 audit_backlog_limit=64 Oct 2 19:43:35.308339 kernel: audit: backlog limit exceeded Oct 2 19:43:35.302000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.308392 kubelet[1443]: E1002 19:43:35.307751 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:35.310453 kernel: audit: audit_lost=2 audit_rate_limit=0 audit_backlog_limit=64 Oct 2 19:43:35.310501 kernel: audit: type=1400 audit(1696275815.302:548): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.302000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.311456 kernel: audit: backlog limit exceeded Oct 2 19:43:35.302000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.302000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.302000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.302000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.302000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:43:35.302000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.302000 audit: BPF prog-id=61 op=LOAD Oct 2 19:43:35.302000 audit[1523]: AVC avc: denied { bpf } for pid=1523 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.302000 audit[1523]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=400014db38 a2=10 a3=0 items=0 ppid=1503 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:35.302000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066646634636435333831363730333837316463353131646565306130 Oct 2 19:43:35.302000 audit[1523]: AVC avc: denied { perfmon } for pid=1523 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.302000 audit[1523]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=400014d5a0 a2=3c a3=0 items=0 ppid=1503 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:35.302000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066646634636435333831363730333837316463353131646565306130 Oct 2 19:43:35.302000 audit[1523]: AVC avc: denied { bpf } for pid=1523 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.302000 audit[1523]: AVC avc: denied { bpf } for pid=1523 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.302000 audit[1523]: AVC avc: denied { bpf } for pid=1523 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.302000 audit[1523]: AVC avc: denied { perfmon } for pid=1523 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.302000 audit[1523]: AVC avc: denied { perfmon } for pid=1523 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.302000 audit[1523]: AVC avc: denied { perfmon } for pid=1523 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.302000 audit[1523]: AVC avc: denied { perfmon } for pid=1523 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.302000 audit[1523]: AVC avc: denied { perfmon } for pid=1523 comm="runc" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.302000 audit[1523]: AVC avc: denied { bpf } for pid=1523 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.302000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.302000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.302000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.302000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.302000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.302000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.302000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.302000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.302000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.302000 audit[1523]: AVC avc: denied { bpf } for pid=1523 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.305000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.302000 audit: BPF prog-id=62 op=LOAD Oct 2 19:43:35.305000 audit: BPF prog-id=63 op=LOAD Oct 2 19:43:35.302000 audit[1523]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400014d8e0 a2=78 a3=0 items=0 ppid=1503 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:35.302000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066646634636435333831363730333837316463353131646565306130 Oct 2 19:43:35.305000 audit[1523]: AVC avc: denied { bpf } for pid=1523 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:43:35.305000 audit[1523]: AVC avc: denied { bpf } for pid=1523 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.305000 audit[1523]: AVC avc: denied { perfmon } for pid=1523 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.305000 audit[1523]: AVC avc: denied { perfmon } for pid=1523 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.305000 audit[1523]: AVC avc: denied { perfmon } for pid=1523 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.305000 audit[1523]: AVC avc: denied { perfmon } for pid=1523 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.305000 audit[1523]: AVC avc: denied { perfmon } for pid=1523 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.305000 audit[1523]: AVC avc: denied { bpf } for pid=1523 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.305000 audit[1524]: AVC avc: denied { bpf } for pid=1524 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.305000 audit[1524]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=400011db38 a2=10 a3=0 items=0 ppid=1505 pid=1524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:35.305000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430353730396363353666393064336462363964346663316633626130 Oct 2 19:43:35.305000 audit[1524]: AVC avc: denied { perfmon } for pid=1524 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.305000 audit[1524]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=400011d5a0 a2=3c a3=0 items=0 ppid=1505 pid=1524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:35.305000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430353730396363353666393064336462363964346663316633626130 Oct 2 19:43:35.305000 audit[1524]: AVC avc: denied { bpf } for pid=1524 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.305000 audit[1524]: AVC avc: denied { bpf } for pid=1524 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.305000 audit[1524]: AVC avc: denied { bpf } for pid=1524 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.305000 audit[1524]: AVC avc: denied { perfmon } for pid=1524 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.305000 audit[1524]: AVC avc: denied { perfmon } for pid=1524 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.305000 audit[1524]: AVC avc: denied { perfmon } for pid=1524 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.305000 audit[1524]: AVC avc: denied { perfmon } for pid=1524 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.305000 audit[1523]: AVC avc: denied { bpf } for pid=1523 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.305000 audit[1523]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400014d670 a2=78 a3=0 items=0 ppid=1503 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:35.305000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066646634636435333831363730333837316463353131646565306130 Oct 2 19:43:35.309000 audit: BPF prog-id=64 op=UNLOAD Oct 2 19:43:35.309000 audit: BPF prog-id=62 op=UNLOAD Oct 2 19:43:35.309000 audit[1523]: AVC avc: denied { bpf } for pid=1523 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.309000 audit[1523]: AVC avc: denied { bpf } for pid=1523 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.309000 audit[1523]: AVC avc: denied { bpf } for pid=1523 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.309000 audit[1523]: AVC avc: denied { perfmon } for pid=1523 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.309000 audit[1523]: AVC avc: denied { perfmon } for pid=1523 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.309000 audit[1523]: AVC avc: denied { perfmon } for pid=1523 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.309000 audit[1523]: AVC avc: denied { perfmon } for pid=1523 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.309000 audit[1523]: AVC avc: denied { perfmon } for pid=1523 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.309000 audit[1523]: AVC avc: denied { bpf } for pid=1523 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.309000 audit[1523]: AVC avc: denied { bpf } for pid=1523 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.309000 audit: BPF prog-id=65 op=LOAD Oct 2 19:43:35.309000 audit[1523]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400014db40 a2=78 a3=0 items=0 ppid=1503 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:35.309000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066646634636435333831363730333837316463353131646565306130 Oct 2 19:43:35.305000 audit[1524]: AVC avc: denied { bpf } for pid=1524 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.305000 audit[1524]: AVC avc: denied { bpf } for pid=1524 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.305000 audit: BPF prog-id=66 op=LOAD Oct 2 19:43:35.305000 audit[1524]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400011d8e0 a2=78 a3=0 items=0 ppid=1505 pid=1524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:35.305000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430353730396363353666393064336462363964346663316633626130 Oct 2 19:43:35.312000 audit[1524]: AVC avc: denied { bpf } for pid=1524 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.312000 audit[1524]: AVC avc: denied { bpf } for pid=1524 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.312000 audit[1524]: AVC avc: denied { perfmon } for pid=1524 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.312000 audit[1524]: AVC avc: denied { perfmon } for pid=1524 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.312000 audit[1524]: AVC avc: denied { perfmon } for pid=1524 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:43:35.312000 audit[1524]: AVC avc: denied { perfmon } for pid=1524 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.312000 audit[1524]: AVC avc: denied { perfmon } for pid=1524 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.312000 audit[1524]: AVC avc: denied { bpf } for pid=1524 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.312000 audit[1524]: AVC avc: denied { bpf } for pid=1524 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.312000 audit: BPF prog-id=67 op=LOAD Oct 2 19:43:35.312000 audit[1524]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400011d670 a2=78 a3=0 items=0 ppid=1505 pid=1524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:35.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430353730396363353666393064336462363964346663316633626130 Oct 2 19:43:35.312000 audit: BPF prog-id=67 op=UNLOAD Oct 2 19:43:35.312000 audit: BPF prog-id=66 op=UNLOAD Oct 2 19:43:35.312000 audit[1524]: AVC avc: denied { bpf } for pid=1524 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.312000 audit[1524]: AVC avc: denied { bpf } for pid=1524 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.312000 audit[1524]: AVC avc: denied { bpf } for pid=1524 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.312000 audit[1524]: AVC avc: denied { perfmon } for pid=1524 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.312000 audit[1524]: AVC avc: denied { perfmon } for pid=1524 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.312000 audit[1524]: AVC avc: denied { perfmon } for pid=1524 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.312000 audit[1524]: AVC avc: denied { perfmon } for pid=1524 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.312000 audit[1524]: AVC avc: denied { perfmon } for pid=1524 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.312000 audit[1524]: AVC avc: denied { bpf } for pid=1524 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Oct 2 19:43:35.312000 audit[1524]: AVC avc: denied { bpf } for pid=1524 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:35.312000 audit: BPF prog-id=68 op=LOAD Oct 2 19:43:35.312000 audit[1524]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400011db40 a2=78 a3=0 items=0 ppid=1505 pid=1524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:35.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430353730396363353666393064336462363964346663316633626130 Oct 2 19:43:35.329863 env[1142]: time="2023-10-02T19:43:35.329819622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wlkps,Uid:b1ed0e1a-951e-4981-a148-7f66eae3559e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0fdf4cd53816703871dc511dee0a04b212d320a92ad90bb0fba6ae2eee0628f4\"" Oct 2 19:43:35.331910 env[1142]: time="2023-10-02T19:43:35.331872422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dqgms,Uid:c0ead1f8-8609-4d8c-8367-c30fb94eefb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"405709cc56f90d3db69d4fc1f3ba0554d586c52891f71d5ff16517df08fa0e54\"" Oct 2 19:43:35.332625 kubelet[1443]: E1002 19:43:35.331999 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:43:35.334175 env[1142]: time="2023-10-02T19:43:35.334144475Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 2 19:43:35.334264 kubelet[1443]: E1002 19:43:35.334159 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:43:35.548760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount877734871.mount: Deactivated successfully. Oct 2 19:43:36.308078 kubelet[1443]: E1002 19:43:36.308047 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:37.308791 kubelet[1443]: E1002 19:43:37.308748 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:38.309738 kubelet[1443]: E1002 19:43:38.309647 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:39.118396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount103473922.mount: Deactivated successfully. 
Oct 2 19:43:39.310075 kubelet[1443]: E1002 19:43:39.310016 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:40.310875 kubelet[1443]: E1002 19:43:40.310833 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:41.311178 kubelet[1443]: E1002 19:43:41.311138 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:41.440947 env[1142]: time="2023-10-02T19:43:41.440899182Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:41.442146 env[1142]: time="2023-10-02T19:43:41.442112411Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:41.443591 env[1142]: time="2023-10-02T19:43:41.443560636Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:41.444825 env[1142]: time="2023-10-02T19:43:41.444793767Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Oct 2 19:43:41.445486 env[1142]: time="2023-10-02T19:43:41.445458153Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.2\"" Oct 2 19:43:41.447292 env[1142]: time="2023-10-02T19:43:41.447250577Z" level=info msg="CreateContainer within sandbox \"0fdf4cd53816703871dc511dee0a04b212d320a92ad90bb0fba6ae2eee0628f4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:43:41.459027 env[1142]: time="2023-10-02T19:43:41.458975293Z" level=info msg="CreateContainer within sandbox \"0fdf4cd53816703871dc511dee0a04b212d320a92ad90bb0fba6ae2eee0628f4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"083ba778112fba6939910e3a65da8d500c515b1186972df7f293159f243357d0\"" Oct 2 19:43:41.459973 env[1142]: time="2023-10-02T19:43:41.459883969Z" level=info msg="StartContainer for \"083ba778112fba6939910e3a65da8d500c515b1186972df7f293159f243357d0\"" Oct 2 19:43:41.478677 systemd[1]: Started cri-containerd-083ba778112fba6939910e3a65da8d500c515b1186972df7f293159f243357d0.scope. Oct 2 19:43:41.498461 systemd[1]: cri-containerd-083ba778112fba6939910e3a65da8d500c515b1186972df7f293159f243357d0.scope: Deactivated successfully. Oct 2 19:43:41.502585 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-083ba778112fba6939910e3a65da8d500c515b1186972df7f293159f243357d0-rootfs.mount: Deactivated successfully. 
Oct 2 19:43:41.704931 env[1142]: time="2023-10-02T19:43:41.704808546Z" level=info msg="shim disconnected" id=083ba778112fba6939910e3a65da8d500c515b1186972df7f293159f243357d0 Oct 2 19:43:41.704931 env[1142]: time="2023-10-02T19:43:41.704862868Z" level=warning msg="cleaning up after shim disconnected" id=083ba778112fba6939910e3a65da8d500c515b1186972df7f293159f243357d0 namespace=k8s.io Oct 2 19:43:41.704931 env[1142]: time="2023-10-02T19:43:41.704873118Z" level=info msg="cleaning up dead shim" Oct 2 19:43:41.714730 env[1142]: time="2023-10-02T19:43:41.714682688Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:43:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1609 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:43:41Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/083ba778112fba6939910e3a65da8d500c515b1186972df7f293159f243357d0/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:43:41.715019 env[1142]: time="2023-10-02T19:43:41.714929370Z" level=error msg="copy shim log" error="read /proc/self/fd/44: file already closed" Oct 2 19:43:41.716596 env[1142]: time="2023-10-02T19:43:41.716551091Z" level=error msg="Failed to pipe stderr of container \"083ba778112fba6939910e3a65da8d500c515b1186972df7f293159f243357d0\"" error="reading from a closed fifo" Oct 2 19:43:41.717547 env[1142]: time="2023-10-02T19:43:41.717511894Z" level=error msg="Failed to pipe stdout of container \"083ba778112fba6939910e3a65da8d500c515b1186972df7f293159f243357d0\"" error="reading from a closed fifo" Oct 2 19:43:41.719312 env[1142]: time="2023-10-02T19:43:41.719252469Z" level=error msg="StartContainer for \"083ba778112fba6939910e3a65da8d500c515b1186972df7f293159f243357d0\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:43:41.719585 kubelet[1443]: E1002 19:43:41.719555 1443 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="083ba778112fba6939910e3a65da8d500c515b1186972df7f293159f243357d0" Oct 2 19:43:41.719922 kubelet[1443]: E1002 19:43:41.719896 1443 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:43:41.719922 kubelet[1443]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:43:41.719922 kubelet[1443]: rm /hostbin/cilium-mount Oct 2 19:43:41.720030 kubelet[1443]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-d7ms9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-wlkps_kube-system(b1ed0e1a-951e-4981-a148-7f66eae3559e): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:43:41.720030 kubelet[1443]: E1002 19:43:41.719945 1443 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-wlkps" podUID="b1ed0e1a-951e-4981-a148-7f66eae3559e" Oct 2 19:43:42.311588 kubelet[1443]: E1002 19:43:42.311521 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:42.429102 kubelet[1443]: E1002 19:43:42.429065 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:43:42.431526 env[1142]: time="2023-10-02T19:43:42.431473288Z" level=info msg="CreateContainer within sandbox \"0fdf4cd53816703871dc511dee0a04b212d320a92ad90bb0fba6ae2eee0628f4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:43:42.450861 env[1142]: time="2023-10-02T19:43:42.450813973Z" level=info msg="CreateContainer within sandbox \"0fdf4cd53816703871dc511dee0a04b212d320a92ad90bb0fba6ae2eee0628f4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"cc97e283fabfafd8567b9c6c9ad594cda1c49a63cf3a48304e6e6b10326bb45e\"" Oct 2 19:43:42.451686 env[1142]: time="2023-10-02T19:43:42.451637517Z" level=info msg="StartContainer for \"cc97e283fabfafd8567b9c6c9ad594cda1c49a63cf3a48304e6e6b10326bb45e\"" Oct 2 19:43:42.479674 systemd[1]: Started cri-containerd-cc97e283fabfafd8567b9c6c9ad594cda1c49a63cf3a48304e6e6b10326bb45e.scope. 
Oct 2 19:43:42.503924 systemd[1]: cri-containerd-cc97e283fabfafd8567b9c6c9ad594cda1c49a63cf3a48304e6e6b10326bb45e.scope: Deactivated successfully. Oct 2 19:43:42.507348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc97e283fabfafd8567b9c6c9ad594cda1c49a63cf3a48304e6e6b10326bb45e-rootfs.mount: Deactivated successfully. Oct 2 19:43:42.527673 env[1142]: time="2023-10-02T19:43:42.527616381Z" level=info msg="shim disconnected" id=cc97e283fabfafd8567b9c6c9ad594cda1c49a63cf3a48304e6e6b10326bb45e Oct 2 19:43:42.527673 env[1142]: time="2023-10-02T19:43:42.527670204Z" level=warning msg="cleaning up after shim disconnected" id=cc97e283fabfafd8567b9c6c9ad594cda1c49a63cf3a48304e6e6b10326bb45e namespace=k8s.io Oct 2 19:43:42.527673 env[1142]: time="2023-10-02T19:43:42.527680737Z" level=info msg="cleaning up dead shim" Oct 2 19:43:42.537585 env[1142]: time="2023-10-02T19:43:42.537525276Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:43:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1648 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:43:42Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/cc97e283fabfafd8567b9c6c9ad594cda1c49a63cf3a48304e6e6b10326bb45e/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:43:42.537841 env[1142]: time="2023-10-02T19:43:42.537780108Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 19:43:42.540558 env[1142]: time="2023-10-02T19:43:42.540503615Z" level=error msg="Failed to pipe stdout of container \"cc97e283fabfafd8567b9c6c9ad594cda1c49a63cf3a48304e6e6b10326bb45e\"" error="reading from a closed fifo" Oct 2 19:43:42.540634 env[1142]: time="2023-10-02T19:43:42.540594463Z" level=error msg="Failed to pipe stderr of container \"cc97e283fabfafd8567b9c6c9ad594cda1c49a63cf3a48304e6e6b10326bb45e\"" error="reading from a closed fifo" Oct 2 19:43:42.542554 env[1142]: time="2023-10-02T19:43:42.542503204Z" level=error msg="StartContainer for \"cc97e283fabfafd8567b9c6c9ad594cda1c49a63cf3a48304e6e6b10326bb45e\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:43:42.542933 kubelet[1443]: E1002 19:43:42.542749 1443 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="cc97e283fabfafd8567b9c6c9ad594cda1c49a63cf3a48304e6e6b10326bb45e" Oct 2 19:43:42.542933 kubelet[1443]: E1002 19:43:42.542861 1443 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:43:42.542933 kubelet[1443]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:43:42.542933 kubelet[1443]: rm /hostbin/cilium-mount Oct 2 19:43:42.542933 kubelet[1443]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-d7ms9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-wlkps_kube-system(b1ed0e1a-951e-4981-a148-7f66eae3559e): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:43:42.542933 kubelet[1443]: E1002 19:43:42.542901 1443 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-wlkps" podUID="b1ed0e1a-951e-4981-a148-7f66eae3559e" Oct 2 19:43:42.707086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4093018391.mount: Deactivated successfully. 
Oct 2 19:43:43.106095 env[1142]: time="2023-10-02T19:43:43.105990504Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:43.111915 env[1142]: time="2023-10-02T19:43:43.111863387Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:43.113102 env[1142]: time="2023-10-02T19:43:43.113069381Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:43.114680 env[1142]: time="2023-10-02T19:43:43.114648425Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:43.114999 env[1142]: time="2023-10-02T19:43:43.114967874Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.2\" returns image reference \"sha256:7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa\"" Oct 2 19:43:43.116812 env[1142]: time="2023-10-02T19:43:43.116773733Z" level=info msg="CreateContainer within sandbox \"405709cc56f90d3db69d4fc1f3ba0554d586c52891f71d5ff16517df08fa0e54\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:43:43.127222 env[1142]: time="2023-10-02T19:43:43.127163996Z" level=info msg="CreateContainer within sandbox \"405709cc56f90d3db69d4fc1f3ba0554d586c52891f71d5ff16517df08fa0e54\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"29a0f1efe58b68f38e5a670e8c8617135cf08b95db1051ec66eb22a7d0cd920b\"" Oct 2 19:43:43.127606 env[1142]: time="2023-10-02T19:43:43.127582265Z" level=info msg="StartContainer for \"29a0f1efe58b68f38e5a670e8c8617135cf08b95db1051ec66eb22a7d0cd920b\"" Oct 2 19:43:43.143750 systemd[1]: Started cri-containerd-29a0f1efe58b68f38e5a670e8c8617135cf08b95db1051ec66eb22a7d0cd920b.scope. 
Oct 2 19:43:43.170000 audit[1667]: AVC avc: denied { perfmon } for pid=1667 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.172916 kernel: kauditd_printk_skb: 108 callbacks suppressed Oct 2 19:43:43.172988 kernel: audit: type=1400 audit(1696275823.170:581): avc: denied { perfmon } for pid=1667 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.173012 kernel: audit: type=1300 audit(1696275823.170:581): arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=1505 pid=1667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.170000 audit[1667]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=1505 pid=1667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.170000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239613066316566653538623638663338653561363730653863383631 Oct 2 19:43:43.177873 kernel: audit: type=1327 audit(1696275823.170:581): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239613066316566653538623638663338653561363730653863383631 Oct 2 19:43:43.177944 kernel: audit: type=1400 audit(1696275823.170:582): avc: denied { bpf } for pid=1667 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.170000 audit[1667]: AVC avc: denied { bpf } for pid=1667 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.179487 kernel: audit: type=1400 audit(1696275823.170:582): avc: denied { bpf } for pid=1667 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.170000 audit[1667]: AVC avc: denied { bpf } for pid=1667 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.170000 audit[1667]: AVC avc: denied { bpf } for pid=1667 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.182655 kernel: audit: type=1400 audit(1696275823.170:582): avc: denied { bpf } for pid=1667 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.182701 kernel: audit: type=1400 audit(1696275823.170:582): avc: denied { perfmon } for pid=1667 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.170000 audit[1667]: AVC avc: denied { 
perfmon } for pid=1667 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.170000 audit[1667]: AVC avc: denied { perfmon } for pid=1667 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.186447 kernel: audit: type=1400 audit(1696275823.170:582): avc: denied { perfmon } for pid=1667 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.170000 audit[1667]: AVC avc: denied { perfmon } for pid=1667 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.188253 kernel: audit: type=1400 audit(1696275823.170:582): avc: denied { perfmon } for pid=1667 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.188303 kernel: audit: type=1400 audit(1696275823.170:582): avc: denied { perfmon } for pid=1667 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.170000 audit[1667]: AVC avc: denied { perfmon } for pid=1667 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.170000 audit[1667]: AVC avc: denied { perfmon } for pid=1667 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.170000 audit[1667]: AVC avc: denied { bpf } for pid=1667 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.170000 audit[1667]: AVC avc: denied { bpf } for pid=1667 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.170000 audit: BPF prog-id=69 op=LOAD Oct 2 19:43:43.170000 audit[1667]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=1505 pid=1667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.170000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239613066316566653538623638663338653561363730653863383631 Oct 2 19:43:43.170000 audit[1667]: AVC avc: denied { bpf } for pid=1667 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.170000 audit[1667]: AVC avc: denied { bpf } for pid=1667 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.170000 audit[1667]: AVC avc: denied { perfmon } for pid=1667 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.170000 audit[1667]: AVC avc: denied { perfmon } 
for pid=1667 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.170000 audit[1667]: AVC avc: denied { perfmon } for pid=1667 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.170000 audit[1667]: AVC avc: denied { perfmon } for pid=1667 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.170000 audit[1667]: AVC avc: denied { perfmon } for pid=1667 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.170000 audit[1667]: AVC avc: denied { bpf } for pid=1667 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.170000 audit[1667]: AVC avc: denied { bpf } for pid=1667 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.170000 audit: BPF prog-id=70 op=LOAD Oct 2 19:43:43.170000 audit[1667]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=1505 pid=1667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.170000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239613066316566653538623638663338653561363730653863383631 Oct 2 19:43:43.172000 audit: BPF prog-id=70 op=UNLOAD Oct 2 19:43:43.172000 audit: BPF prog-id=69 op=UNLOAD Oct 2 19:43:43.172000 audit[1667]: AVC avc: denied { bpf } for pid=1667 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.172000 audit[1667]: AVC avc: denied { bpf } for pid=1667 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.172000 audit[1667]: AVC avc: denied { bpf } for pid=1667 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.172000 audit[1667]: AVC avc: denied { perfmon } for pid=1667 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.172000 audit[1667]: AVC avc: denied { perfmon } for pid=1667 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.172000 audit[1667]: AVC avc: denied { perfmon } for pid=1667 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.172000 audit[1667]: AVC avc: denied { perfmon } for pid=1667 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.172000 audit[1667]: AVC avc: denied { perfmon } for 
pid=1667 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.172000 audit[1667]: AVC avc: denied { bpf } for pid=1667 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.172000 audit[1667]: AVC avc: denied { bpf } for pid=1667 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:43.172000 audit: BPF prog-id=71 op=LOAD Oct 2 19:43:43.172000 audit[1667]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=1505 pid=1667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239613066316566653538623638663338653561363730653863383631 Oct 2 19:43:43.199653 env[1142]: time="2023-10-02T19:43:43.199550091Z" level=info msg="StartContainer for \"29a0f1efe58b68f38e5a670e8c8617135cf08b95db1051ec66eb22a7d0cd920b\" returns successfully" Oct 2 19:43:43.311844 kubelet[1443]: E1002 19:43:43.311805 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:43.324000 audit[1720]: NETFILTER_CFG table=mangle:14 family=10 entries=1 op=nft_register_chain pid=1720 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:43.324000 audit[1720]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc36f5bc0 a2=0 a3=ffff9f9f66c0 items=0 ppid=1677 pid=1720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.324000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:43:43.326000 audit[1719]: NETFILTER_CFG table=mangle:15 family=2 entries=1 op=nft_register_chain pid=1719 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:43.326000 audit[1719]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff833ff50 a2=0 a3=ffff8d5516c0 items=0 ppid=1677 pid=1719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.326000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:43:43.327000 audit[1723]: NETFILTER_CFG table=nat:16 family=10 entries=1 op=nft_register_chain pid=1723 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:43.327000 audit[1723]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffea7e1e80 a2=0 a3=ffffbb1a56c0 items=0 ppid=1677 pid=1723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.327000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:43:43.328000 audit[1724]: NETFILTER_CFG table=nat:17 family=2 entries=1 op=nft_register_chain pid=1724 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:43.328000 audit[1724]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff104a560 a2=0 a3=ffffb8aad6c0 items=0 ppid=1677 pid=1724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.328000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:43:43.328000 audit[1725]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1725 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:43.328000 audit[1725]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe6cc6980 a2=0 a3=ffff95fac6c0 items=0 ppid=1677 pid=1725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.328000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:43:43.329000 audit[1726]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_chain pid=1726 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:43.329000 audit[1726]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff78e9700 a2=0 a3=ffffa14946c0 items=0 ppid=1677 pid=1726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.329000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:43:43.425000 audit[1727]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_chain pid=1727 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:43.425000 audit[1727]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc54a3cd0 a2=0 a3=ffff8869c6c0 items=0 ppid=1677 pid=1727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.425000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:43:43.428000 audit[1729]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1729 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:43.428000 audit[1729]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe304c250 a2=0 a3=ffff82ca56c0 items=0 ppid=1677 pid=1729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.428000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:43:43.433385 kubelet[1443]: E1002 19:43:43.433327 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:43:43.435552 kubelet[1443]: I1002 19:43:43.435526 1443 scope.go:117] "RemoveContainer" containerID="083ba778112fba6939910e3a65da8d500c515b1186972df7f293159f243357d0" Oct 2 19:43:43.435916 kubelet[1443]: I1002 19:43:43.435896 1443 scope.go:117] "RemoveContainer" containerID="083ba778112fba6939910e3a65da8d500c515b1186972df7f293159f243357d0" Oct 2 19:43:43.437560 env[1142]: time="2023-10-02T19:43:43.437514567Z" level=info msg="RemoveContainer for \"083ba778112fba6939910e3a65da8d500c515b1186972df7f293159f243357d0\"" Oct 2 19:43:43.438405 env[1142]: time="2023-10-02T19:43:43.438376328Z" level=info msg="RemoveContainer for \"083ba778112fba6939910e3a65da8d500c515b1186972df7f293159f243357d0\"" Oct 2 19:43:43.438666 env[1142]: time="2023-10-02T19:43:43.438630482Z" level=error msg="RemoveContainer for \"083ba778112fba6939910e3a65da8d500c515b1186972df7f293159f243357d0\" failed" error="failed to set removing state for container \"083ba778112fba6939910e3a65da8d500c515b1186972df7f293159f243357d0\": container is already in removing state" Oct 2 19:43:43.439222 kubelet[1443]: E1002 19:43:43.439203 1443 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"083ba778112fba6939910e3a65da8d500c515b1186972df7f293159f243357d0\": container is already in removing state" containerID="083ba778112fba6939910e3a65da8d500c515b1186972df7f293159f243357d0" Oct 2 19:43:43.439263 kubelet[1443]: E1002 19:43:43.439254 1443 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "083ba778112fba6939910e3a65da8d500c515b1186972df7f293159f243357d0": container is already in removing state; Skipping pod "cilium-wlkps_kube-system(b1ed0e1a-951e-4981-a148-7f66eae3559e)" Oct 2 19:43:43.439321 kubelet[1443]: E1002 19:43:43.439310 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:43:43.439563 kubelet[1443]: E1002 19:43:43.439548 1443 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-wlkps_kube-system(b1ed0e1a-951e-4981-a148-7f66eae3559e)\"" pod="kube-system/cilium-wlkps" podUID="b1ed0e1a-951e-4981-a148-7f66eae3559e" Oct 2 19:43:43.440000 audit[1732]: NETFILTER_CFG table=filter:22 family=2 entries=2 op=nft_register_chain pid=1732 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:43.440000 audit[1732]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffdad26030 a2=0 a3=ffffbe1f76c0 items=0 ppid=1677 pid=1732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.440000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:43:43.441000 audit[1733]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_chain pid=1733 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:43.441000 audit[1733]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffef751c40 a2=0 a3=ffffbf68d6c0 items=0 ppid=1677 pid=1733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.441000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:43:43.444000 audit[1735]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=1735 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:43.444000 audit[1735]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd50b49a0 a2=0 a3=ffff906f06c0 items=0 ppid=1677 pid=1735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.444000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:43:43.445000 audit[1736]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_chain pid=1736 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:43.445000 audit[1736]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc2636680 a2=0 a3=ffffb79e26c0 items=0 ppid=1677 pid=1736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.445000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:43:43.450134 env[1142]: time="2023-10-02T19:43:43.449835731Z" level=info msg="RemoveContainer for \"083ba778112fba6939910e3a65da8d500c515b1186972df7f293159f243357d0\" returns successfully" Oct 2 19:43:43.450332 kubelet[1443]: I1002 19:43:43.450262 1443 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-dqgms" podStartSLOduration=5.669630722 podCreationTimestamp="2023-10-02 19:43:30 +0000 UTC" firstStartedPulling="2023-10-02 19:43:35.33466612 +0000 UTC m=+6.981765321" lastFinishedPulling="2023-10-02 19:43:43.115251323 +0000 UTC m=+14.762354275" observedRunningTime="2023-10-02 19:43:43.449166062 +0000 UTC m=+15.096265263" watchObservedRunningTime="2023-10-02 19:43:43.450219676 +0000 UTC m=+15.097318877" Oct 2 19:43:43.450000 audit[1738]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=1738 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:43.450000 audit[1738]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffcc1153a0 a2=0 a3=ffff9cd796c0 items=0 ppid=1677 pid=1738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.450000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:43:43.455000 audit[1741]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_rule pid=1741 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:43.455000 audit[1741]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff567bb10 a2=0 a3=ffffba2056c0 items=0 ppid=1677 pid=1741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.455000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:43:43.456000 audit[1742]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1742 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:43.456000 audit[1742]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc5e97b20 a2=0 a3=ffffb8c126c0 items=0 ppid=1677 pid=1742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.456000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:43:43.459000 audit[1744]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1744 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:43.459000 audit[1744]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffae25ee0 a2=0 a3=ffff808ac6c0 items=0 ppid=1677 pid=1744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.459000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:43:43.460000 audit[1745]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_chain pid=1745 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:43.460000 audit[1745]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcedd4480 a2=0 a3=ffffafa906c0 items=0 ppid=1677 pid=1745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.460000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:43:43.464000 audit[1747]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_rule pid=1747 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:43.464000 audit[1747]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff74ccff0 a2=0 a3=ffffa71976c0 items=0 ppid=1677 pid=1747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.464000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:43:43.468000 audit[1750]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=1750 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:43.468000 audit[1750]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcd671450 a2=0 a3=ffffbe2ca6c0 items=0 ppid=1677 pid=1750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.468000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:43:43.472000 audit[1753]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=1753 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:43.472000 audit[1753]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffedba9330 a2=0 a3=ffffa4eb96c0 items=0 ppid=1677 pid=1753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.472000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:43:43.473000 audit[1754]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1754 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:43.473000 audit[1754]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd0772ff0 a2=0 a3=ffff9d6ce6c0 items=0 ppid=1677 pid=1754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.473000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:43:43.476000 audit[1756]: NETFILTER_CFG table=nat:35 family=2 entries=2 op=nft_register_chain pid=1756 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:43.476000 audit[1756]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffd5e55a50 a2=0 a3=ffffab9966c0 items=0 ppid=1677 pid=1756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.476000 
audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:43:43.501000 audit[1762]: NETFILTER_CFG table=nat:36 family=2 entries=2 op=nft_register_chain pid=1762 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:43.501000 audit[1762]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffcd48ab40 a2=0 a3=ffffab6316c0 items=0 ppid=1677 pid=1762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.501000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:43:43.502000 audit[1763]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=1763 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:43.502000 audit[1763]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdec1a450 a2=0 a3=ffffa24e46c0 items=0 ppid=1677 pid=1763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.502000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:43:43.504000 audit[1765]: NETFILTER_CFG table=nat:38 family=2 entries=2 op=nft_register_chain pid=1765 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:43.504000 audit[1765]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffff7a1310 a2=0 a3=ffff862196c0 items=0 ppid=1677 pid=1765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.504000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:43:43.520000 audit[1771]: NETFILTER_CFG table=filter:39 family=2 entries=8 op=nft_register_rule pid=1771 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:43:43.520000 audit[1771]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4956 a0=3 a1=ffffd4abce90 a2=0 a3=ffff94a7c6c0 items=0 ppid=1677 pid=1771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.520000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:43:43.531000 audit[1771]: NETFILTER_CFG table=nat:40 family=2 entries=14 op=nft_register_chain pid=1771 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:43:43.531000 audit[1771]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffd4abce90 a2=0 a3=ffff94a7c6c0 items=0 ppid=1677 pid=1771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.531000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:43:43.533000 audit[1777]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=1777 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:43.533000 audit[1777]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe4903240 a2=0 a3=ffffb70cc6c0 items=0 ppid=1677 pid=1777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.533000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:43:43.535000 audit[1779]: NETFILTER_CFG table=filter:42 family=10 entries=2 op=nft_register_chain pid=1779 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:43.535000 audit[1779]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffcace10a0 a2=0 a3=ffffbaf746c0 items=0 ppid=1677 pid=1779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.535000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:43:43.539000 audit[1782]: NETFILTER_CFG table=filter:43 family=10 entries=2 op=nft_register_chain pid=1782 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:43.539000 audit[1782]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffe6a38f60 a2=0 a3=ffff9cb1f6c0 items=0 ppid=1677 pid=1782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.539000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 19:43:43.540000 audit[1783]: NETFILTER_CFG table=filter:44 family=10 entries=1 op=nft_register_chain pid=1783 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:43.540000 audit[1783]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcd1f1610 a2=0 a3=ffff8df556c0 items=0 ppid=1677 pid=1783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.540000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:43:43.542000 audit[1785]: NETFILTER_CFG table=filter:45 family=10 entries=1 op=nft_register_rule pid=1785 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:43.542000 audit[1785]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=528 a0=3 a1=ffffc28708a0 a2=0 a3=ffffbeda76c0 items=0 ppid=1677 pid=1785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.542000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:43:43.543000 audit[1786]: NETFILTER_CFG table=filter:46 family=10 entries=1 op=nft_register_chain pid=1786 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:43.543000 audit[1786]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe5ec1560 a2=0 a3=ffffb685f6c0 items=0 ppid=1677 pid=1786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.543000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:43:43.546000 audit[1788]: NETFILTER_CFG table=filter:47 family=10 entries=1 op=nft_register_rule pid=1788 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:43.546000 audit[1788]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffcbabb3e0 a2=0 a3=ffffb55e16c0 items=0 ppid=1677 pid=1788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.546000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 19:43:43.550000 audit[1791]: NETFILTER_CFG table=filter:48 family=10 entries=2 op=nft_register_chain pid=1791 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:43.550000 audit[1791]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffe1f69b20 a2=0 a3=ffff958586c0 items=0 ppid=1677 pid=1791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.550000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:43:43.551000 audit[1792]: NETFILTER_CFG table=filter:49 family=10 entries=1 op=nft_register_chain pid=1792 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:43.551000 audit[1792]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd623d450 a2=0 a3=ffff9bac56c0 items=0 ppid=1677 pid=1792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.551000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:43:43.554000 audit[1794]: NETFILTER_CFG table=filter:50 family=10 entries=1 op=nft_register_rule pid=1794 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:43.554000 audit[1794]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdb66f7e0 a2=0 a3=ffff962a86c0 items=0 ppid=1677 pid=1794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.554000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:43:43.555000 audit[1795]: NETFILTER_CFG table=filter:51 family=10 entries=1 op=nft_register_chain pid=1795 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:43.555000 audit[1795]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc30420b0 a2=0 a3=ffffadb8b6c0 items=0 ppid=1677 pid=1795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.555000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:43:43.558000 audit[1797]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_rule pid=1797 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:43.558000 audit[1797]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffeb28d1a0 a2=0 a3=ffff8fd0f6c0 items=0 ppid=1677 pid=1797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.558000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:43:43.562000 audit[1800]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_rule pid=1800 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:43.562000 audit[1800]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc7e8c830 a2=0 a3=ffffa09806c0 items=0 ppid=1677 pid=1800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.562000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:43:43.565000 audit[1803]: NETFILTER_CFG table=filter:54 family=10 entries=1 op=nft_register_rule pid=1803 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:43.565000 audit[1803]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc2cfb9e0 a2=0 a3=ffffbbdd16c0 items=0 
ppid=1677 pid=1803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.565000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:43:43.566000 audit[1804]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=1804 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:43.566000 audit[1804]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe4570df0 a2=0 a3=ffff9d4976c0 items=0 ppid=1677 pid=1804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.566000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:43:43.569000 audit[1806]: NETFILTER_CFG table=nat:56 family=10 entries=2 op=nft_register_chain pid=1806 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:43.569000 audit[1806]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffde89b600 a2=0 a3=ffffb8a716c0 items=0 ppid=1677 pid=1806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.569000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:43:43.573000 audit[1809]: NETFILTER_CFG table=nat:57 family=10 entries=2 op=nft_register_chain pid=1809 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:43.573000 audit[1809]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=fffff8ffd5f0 a2=0 a3=ffffac00a6c0 items=0 ppid=1677 pid=1809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.573000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:43:43.574000 audit[1810]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=1810 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:43.574000 audit[1810]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff5035cf0 a2=0 a3=ffff919b26c0 items=0 ppid=1677 pid=1810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.574000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:43:43.577000 audit[1812]: NETFILTER_CFG table=nat:59 family=10 entries=2 op=nft_register_chain pid=1812 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:43.577000 audit[1812]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffe3196f80 a2=0 a3=ffff87d1a6c0 items=0 ppid=1677 pid=1812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.577000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:43:43.578000 audit[1813]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=1813 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:43.578000 audit[1813]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd93ec960 a2=0 a3=ffffac1796c0 items=0 ppid=1677 pid=1813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.578000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:43:43.580000 audit[1815]: NETFILTER_CFG table=filter:61 family=10 entries=1 op=nft_register_rule pid=1815 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:43.580000 audit[1815]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc81ea450 a2=0 a3=ffffa15f36c0 items=0 ppid=1677 pid=1815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.580000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:43:43.583000 audit[1818]: NETFILTER_CFG table=filter:62 family=10 entries=1 op=nft_register_rule pid=1818 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:43.583000 audit[1818]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff6f33bf0 a2=0 a3=ffffbccb56c0 items=0 ppid=1677 pid=1818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.583000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:43:43.586000 audit[1820]: NETFILTER_CFG table=filter:63 family=10 entries=3 op=nft_register_rule pid=1820 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:43:43.586000 audit[1820]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffe59f75b0 a2=0 a3=ffffa4fa86c0 items=0 ppid=1677 pid=1820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.586000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:43:43.587000 audit[1820]: NETFILTER_CFG table=nat:64 family=10 entries=7 op=nft_register_chain pid=1820 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:43:43.587000 audit[1820]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1968 a0=3 a1=ffffe59f75b0 a2=0 a3=ffffa4fa86c0 items=0 ppid=1677 pid=1820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:43.587000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:43:44.312642 kubelet[1443]: E1002 19:43:44.312606 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:44.438394 kubelet[1443]: E1002 19:43:44.438360 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:43:44.438557 kubelet[1443]: E1002 19:43:44.438359 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:43:44.438635 kubelet[1443]: E1002 19:43:44.438617 1443 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-wlkps_kube-system(b1ed0e1a-951e-4981-a148-7f66eae3559e)\"" pod="kube-system/cilium-wlkps" podUID="b1ed0e1a-951e-4981-a148-7f66eae3559e" Oct 2 19:43:44.809680 kubelet[1443]: W1002 19:43:44.809573 1443 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1ed0e1a_951e_4981_a148_7f66eae3559e.slice/cri-containerd-083ba778112fba6939910e3a65da8d500c515b1186972df7f293159f243357d0.scope WatchSource:0}: container "083ba778112fba6939910e3a65da8d500c515b1186972df7f293159f243357d0" in namespace "k8s.io": not found Oct 2 19:43:45.313851 kubelet[1443]: E1002 19:43:45.313823 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:46.314702 kubelet[1443]: E1002 19:43:46.314644 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:47.315086 kubelet[1443]: E1002 19:43:47.315052 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:47.916332 kubelet[1443]: W1002 19:43:47.916296 1443 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1ed0e1a_951e_4981_a148_7f66eae3559e.slice/cri-containerd-cc97e283fabfafd8567b9c6c9ad594cda1c49a63cf3a48304e6e6b10326bb45e.scope WatchSource:0}: task cc97e283fabfafd8567b9c6c9ad594cda1c49a63cf3a48304e6e6b10326bb45e not found: not found Oct 2 19:43:48.315449 kubelet[1443]: E1002 19:43:48.315314 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:49.304792 kubelet[1443]: E1002 19:43:49.304731 1443 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:49.316055 kubelet[1443]: E1002 19:43:49.316029 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:43:50.316193 kubelet[1443]: E1002 19:43:50.316154 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:51.316927 kubelet[1443]: E1002 19:43:51.316858 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:52.317965 kubelet[1443]: E1002 19:43:52.317896 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:53.319039 kubelet[1443]: E1002 19:43:53.318981 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:54.319685 kubelet[1443]: E1002 19:43:54.319608 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:55.321067 kubelet[1443]: E1002 19:43:55.320075 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:55.408571 kubelet[1443]: E1002 19:43:55.407892 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:43:55.410431 env[1142]: time="2023-10-02T19:43:55.410383532Z" level=info msg="CreateContainer within sandbox \"0fdf4cd53816703871dc511dee0a04b212d320a92ad90bb0fba6ae2eee0628f4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:43:55.436970 env[1142]: time="2023-10-02T19:43:55.436812460Z" level=info msg="CreateContainer within sandbox \"0fdf4cd53816703871dc511dee0a04b212d320a92ad90bb0fba6ae2eee0628f4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"2c51a7c84343cbbbb516e76ed1d29709220eaf8fb397341816a21e5c82d361f5\"" Oct 2 19:43:55.438160 env[1142]: time="2023-10-02T19:43:55.437349660Z" level=info msg="StartContainer for \"2c51a7c84343cbbbb516e76ed1d29709220eaf8fb397341816a21e5c82d361f5\"" Oct 2 19:43:55.460168 systemd[1]: Started cri-containerd-2c51a7c84343cbbbb516e76ed1d29709220eaf8fb397341816a21e5c82d361f5.scope. Oct 2 19:43:55.489801 systemd[1]: cri-containerd-2c51a7c84343cbbbb516e76ed1d29709220eaf8fb397341816a21e5c82d361f5.scope: Deactivated successfully. 
Oct 2 19:43:55.601472 env[1142]: time="2023-10-02T19:43:55.601348596Z" level=info msg="shim disconnected" id=2c51a7c84343cbbbb516e76ed1d29709220eaf8fb397341816a21e5c82d361f5 Oct 2 19:43:55.601686 env[1142]: time="2023-10-02T19:43:55.601666933Z" level=warning msg="cleaning up after shim disconnected" id=2c51a7c84343cbbbb516e76ed1d29709220eaf8fb397341816a21e5c82d361f5 namespace=k8s.io Oct 2 19:43:55.601744 env[1142]: time="2023-10-02T19:43:55.601731744Z" level=info msg="cleaning up dead shim" Oct 2 19:43:55.609946 env[1142]: time="2023-10-02T19:43:55.609906365Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:43:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1845 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:43:55Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/2c51a7c84343cbbbb516e76ed1d29709220eaf8fb397341816a21e5c82d361f5/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:43:55.610337 env[1142]: time="2023-10-02T19:43:55.610282956Z" level=error msg="copy shim log" error="read /proc/self/fd/58: file already closed" Oct 2 19:43:55.610745 env[1142]: time="2023-10-02T19:43:55.610514332Z" level=error msg="Failed to pipe stdout of container \"2c51a7c84343cbbbb516e76ed1d29709220eaf8fb397341816a21e5c82d361f5\"" error="reading from a closed fifo" Oct 2 19:43:55.610890 env[1142]: time="2023-10-02T19:43:55.610520050Z" level=error msg="Failed to pipe stderr of container \"2c51a7c84343cbbbb516e76ed1d29709220eaf8fb397341816a21e5c82d361f5\"" error="reading from a closed fifo" Oct 2 19:43:55.612488 env[1142]: time="2023-10-02T19:43:55.612449426Z" level=error msg="StartContainer for \"2c51a7c84343cbbbb516e76ed1d29709220eaf8fb397341816a21e5c82d361f5\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:43:55.612814 kubelet[1443]: E1002 19:43:55.612780 1443 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="2c51a7c84343cbbbb516e76ed1d29709220eaf8fb397341816a21e5c82d361f5" Oct 2 19:43:55.613361 kubelet[1443]: E1002 19:43:55.613336 1443 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:43:55.613361 kubelet[1443]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:43:55.613361 kubelet[1443]: rm /hostbin/cilium-mount Oct 2 19:43:55.613361 kubelet[1443]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-d7ms9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-wlkps_kube-system(b1ed0e1a-951e-4981-a148-7f66eae3559e): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:43:55.613518 kubelet[1443]: E1002 19:43:55.613454 1443 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-wlkps" podUID="b1ed0e1a-951e-4981-a148-7f66eae3559e" Oct 2 19:43:56.321784 kubelet[1443]: E1002 19:43:56.321747 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:56.417826 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c51a7c84343cbbbb516e76ed1d29709220eaf8fb397341816a21e5c82d361f5-rootfs.mount: Deactivated successfully. 
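The StartContainer failure recorded just above bottoms out in runc reporting "write /proc/self/attr/keycreate: invalid argument". That file is the SELinux keycreate attribute, which runc commonly sets so the container's session keyring gets the container label; EINVAL there usually means the kernel's loaded policy rejects the requested label. Below is a small read-only Python sketch for inspecting the relevant attributes on such a host; it assumes an SELinux-capable kernel with selinuxfs mounted at /sys/fs/selinux and does not attempt to reproduce the failing write.

    # Inspect the SELinux process attributes involved in the runc error above.
    # Read-only: no labels are written.
    from pathlib import Path

    def read_attr(path: str) -> str:
        try:
            return Path(path).read_text().strip("\x00\n") or "<empty>"
        except OSError as exc:
            return f"<unreadable: {exc}>"

    for name in ("/proc/self/attr/current",
                 "/proc/self/attr/keycreate",
                 "/sys/fs/selinux/enforce"):
        print(f"{name}: {read_attr(name)}")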
Oct 2 19:43:56.460606 kubelet[1443]: I1002 19:43:56.460430 1443 scope.go:117] "RemoveContainer" containerID="cc97e283fabfafd8567b9c6c9ad594cda1c49a63cf3a48304e6e6b10326bb45e" Oct 2 19:43:56.460828 kubelet[1443]: I1002 19:43:56.460805 1443 scope.go:117] "RemoveContainer" containerID="cc97e283fabfafd8567b9c6c9ad594cda1c49a63cf3a48304e6e6b10326bb45e" Oct 2 19:43:56.461992 env[1142]: time="2023-10-02T19:43:56.461945124Z" level=info msg="RemoveContainer for \"cc97e283fabfafd8567b9c6c9ad594cda1c49a63cf3a48304e6e6b10326bb45e\"" Oct 2 19:43:56.463683 env[1142]: time="2023-10-02T19:43:56.462948066Z" level=info msg="RemoveContainer for \"cc97e283fabfafd8567b9c6c9ad594cda1c49a63cf3a48304e6e6b10326bb45e\"" Oct 2 19:43:56.463683 env[1142]: time="2023-10-02T19:43:56.463025388Z" level=error msg="RemoveContainer for \"cc97e283fabfafd8567b9c6c9ad594cda1c49a63cf3a48304e6e6b10326bb45e\" failed" error="failed to set removing state for container \"cc97e283fabfafd8567b9c6c9ad594cda1c49a63cf3a48304e6e6b10326bb45e\": container is already in removing state" Oct 2 19:43:56.463818 kubelet[1443]: E1002 19:43:56.463163 1443 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"cc97e283fabfafd8567b9c6c9ad594cda1c49a63cf3a48304e6e6b10326bb45e\": container is already in removing state" containerID="cc97e283fabfafd8567b9c6c9ad594cda1c49a63cf3a48304e6e6b10326bb45e" Oct 2 19:43:56.463818 kubelet[1443]: E1002 19:43:56.463205 1443 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "cc97e283fabfafd8567b9c6c9ad594cda1c49a63cf3a48304e6e6b10326bb45e": container is already in removing state; Skipping pod "cilium-wlkps_kube-system(b1ed0e1a-951e-4981-a148-7f66eae3559e)" Oct 2 19:43:56.463818 kubelet[1443]: E1002 19:43:56.463267 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:43:56.463818 kubelet[1443]: E1002 19:43:56.463524 1443 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-wlkps_kube-system(b1ed0e1a-951e-4981-a148-7f66eae3559e)\"" pod="kube-system/cilium-wlkps" podUID="b1ed0e1a-951e-4981-a148-7f66eae3559e" Oct 2 19:43:56.466126 env[1142]: time="2023-10-02T19:43:56.466006614Z" level=info msg="RemoveContainer for \"cc97e283fabfafd8567b9c6c9ad594cda1c49a63cf3a48304e6e6b10326bb45e\" returns successfully" Oct 2 19:43:57.321927 kubelet[1443]: E1002 19:43:57.321874 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:58.322305 kubelet[1443]: E1002 19:43:58.322268 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:58.705645 kubelet[1443]: W1002 19:43:58.705546 1443 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1ed0e1a_951e_4981_a148_7f66eae3559e.slice/cri-containerd-2c51a7c84343cbbbb516e76ed1d29709220eaf8fb397341816a21e5c82d361f5.scope WatchSource:0}: task 2c51a7c84343cbbbb516e76ed1d29709220eaf8fb397341816a21e5c82d361f5 not found: not found Oct 2 19:43:59.322502 kubelet[1443]: E1002 19:43:59.322467 1443 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:00.323159 kubelet[1443]: E1002 19:44:00.323108 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:01.323923 kubelet[1443]: E1002 19:44:01.323882 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:02.324708 kubelet[1443]: E1002 19:44:02.324655 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:03.325109 kubelet[1443]: E1002 19:44:03.325048 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:04.325512 kubelet[1443]: E1002 19:44:04.325454 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:05.333058 kubelet[1443]: E1002 19:44:05.333006 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:06.334074 kubelet[1443]: E1002 19:44:06.334019 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:07.334198 kubelet[1443]: E1002 19:44:07.334136 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:08.335042 kubelet[1443]: E1002 19:44:08.335007 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:08.954200 update_engine[1134]: I1002 19:44:08.953843 1134 update_attempter.cc:505] Updating boot flags... 
Oct 2 19:44:09.305428 kubelet[1443]: E1002 19:44:09.304667 1443 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:09.335973 kubelet[1443]: E1002 19:44:09.335933 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:10.336331 kubelet[1443]: E1002 19:44:10.336300 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:11.336649 kubelet[1443]: E1002 19:44:11.336607 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:11.408264 kubelet[1443]: E1002 19:44:11.408227 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:44:11.408529 kubelet[1443]: E1002 19:44:11.408512 1443 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-wlkps_kube-system(b1ed0e1a-951e-4981-a148-7f66eae3559e)\"" pod="kube-system/cilium-wlkps" podUID="b1ed0e1a-951e-4981-a148-7f66eae3559e" Oct 2 19:44:12.337303 kubelet[1443]: E1002 19:44:12.337268 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:13.337754 kubelet[1443]: E1002 19:44:13.337719 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:14.339279 kubelet[1443]: E1002 19:44:14.339236 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:15.339670 kubelet[1443]: E1002 19:44:15.339637 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:16.340797 kubelet[1443]: E1002 19:44:16.340763 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:17.341409 kubelet[1443]: E1002 19:44:17.341360 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:18.342287 kubelet[1443]: E1002 19:44:18.342244 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:19.343204 kubelet[1443]: E1002 19:44:19.343163 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:20.343522 kubelet[1443]: E1002 19:44:20.343487 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:21.344603 kubelet[1443]: E1002 19:44:21.344568 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:22.345983 kubelet[1443]: E1002 19:44:22.345952 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:23.347286 kubelet[1443]: E1002 19:44:23.347248 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:24.348290 kubelet[1443]: E1002 
19:44:24.348251 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:25.349672 kubelet[1443]: E1002 19:44:25.349613 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:25.408062 kubelet[1443]: E1002 19:44:25.408033 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:44:25.410451 env[1142]: time="2023-10-02T19:44:25.410352228Z" level=info msg="CreateContainer within sandbox \"0fdf4cd53816703871dc511dee0a04b212d320a92ad90bb0fba6ae2eee0628f4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:44:25.419192 env[1142]: time="2023-10-02T19:44:25.419145076Z" level=info msg="CreateContainer within sandbox \"0fdf4cd53816703871dc511dee0a04b212d320a92ad90bb0fba6ae2eee0628f4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"be6538c4efc04590d1ad90848be0e7f21339f9f13372b9573bf8d15e5583f2e3\"" Oct 2 19:44:25.420480 env[1142]: time="2023-10-02T19:44:25.419841600Z" level=info msg="StartContainer for \"be6538c4efc04590d1ad90848be0e7f21339f9f13372b9573bf8d15e5583f2e3\"" Oct 2 19:44:25.438265 systemd[1]: Started cri-containerd-be6538c4efc04590d1ad90848be0e7f21339f9f13372b9573bf8d15e5583f2e3.scope. Oct 2 19:44:25.481173 systemd[1]: cri-containerd-be6538c4efc04590d1ad90848be0e7f21339f9f13372b9573bf8d15e5583f2e3.scope: Deactivated successfully. Oct 2 19:44:25.484828 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be6538c4efc04590d1ad90848be0e7f21339f9f13372b9573bf8d15e5583f2e3-rootfs.mount: Deactivated successfully. Oct 2 19:44:25.492057 env[1142]: time="2023-10-02T19:44:25.491969796Z" level=info msg="shim disconnected" id=be6538c4efc04590d1ad90848be0e7f21339f9f13372b9573bf8d15e5583f2e3 Oct 2 19:44:25.492057 env[1142]: time="2023-10-02T19:44:25.492025997Z" level=warning msg="cleaning up after shim disconnected" id=be6538c4efc04590d1ad90848be0e7f21339f9f13372b9573bf8d15e5583f2e3 namespace=k8s.io Oct 2 19:44:25.492057 env[1142]: time="2023-10-02T19:44:25.492037117Z" level=info msg="cleaning up dead shim" Oct 2 19:44:25.500877 env[1142]: time="2023-10-02T19:44:25.500819605Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:44:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1900 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:44:25Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/be6538c4efc04590d1ad90848be0e7f21339f9f13372b9573bf8d15e5583f2e3/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:44:25.501130 env[1142]: time="2023-10-02T19:44:25.501059806Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:44:25.501488 env[1142]: time="2023-10-02T19:44:25.501242047Z" level=error msg="Failed to pipe stderr of container \"be6538c4efc04590d1ad90848be0e7f21339f9f13372b9573bf8d15e5583f2e3\"" error="reading from a closed fifo" Oct 2 19:44:25.501761 env[1142]: time="2023-10-02T19:44:25.501691330Z" level=error msg="Failed to pipe stdout of container \"be6538c4efc04590d1ad90848be0e7f21339f9f13372b9573bf8d15e5583f2e3\"" error="reading from a closed fifo" Oct 2 19:44:25.503956 env[1142]: time="2023-10-02T19:44:25.503880942Z" level=error msg="StartContainer for 
\"be6538c4efc04590d1ad90848be0e7f21339f9f13372b9573bf8d15e5583f2e3\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:44:25.504096 kubelet[1443]: E1002 19:44:25.504075 1443 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="be6538c4efc04590d1ad90848be0e7f21339f9f13372b9573bf8d15e5583f2e3" Oct 2 19:44:25.504188 kubelet[1443]: E1002 19:44:25.504175 1443 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:44:25.504188 kubelet[1443]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:44:25.504188 kubelet[1443]: rm /hostbin/cilium-mount Oct 2 19:44:25.504188 kubelet[1443]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-d7ms9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-wlkps_kube-system(b1ed0e1a-951e-4981-a148-7f66eae3559e): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:44:25.504330 kubelet[1443]: E1002 19:44:25.504215 1443 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-wlkps" 
podUID="b1ed0e1a-951e-4981-a148-7f66eae3559e" Oct 2 19:44:25.507078 kubelet[1443]: I1002 19:44:25.506893 1443 scope.go:117] "RemoveContainer" containerID="2c51a7c84343cbbbb516e76ed1d29709220eaf8fb397341816a21e5c82d361f5" Oct 2 19:44:25.508291 env[1142]: time="2023-10-02T19:44:25.508259286Z" level=info msg="RemoveContainer for \"2c51a7c84343cbbbb516e76ed1d29709220eaf8fb397341816a21e5c82d361f5\"" Oct 2 19:44:25.511083 env[1142]: time="2023-10-02T19:44:25.511048461Z" level=info msg="RemoveContainer for \"2c51a7c84343cbbbb516e76ed1d29709220eaf8fb397341816a21e5c82d361f5\" returns successfully" Oct 2 19:44:26.350265 kubelet[1443]: E1002 19:44:26.350223 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:26.510589 kubelet[1443]: E1002 19:44:26.510563 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:44:26.510946 kubelet[1443]: E1002 19:44:26.510831 1443 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-wlkps_kube-system(b1ed0e1a-951e-4981-a148-7f66eae3559e)\"" pod="kube-system/cilium-wlkps" podUID="b1ed0e1a-951e-4981-a148-7f66eae3559e" Oct 2 19:44:27.350949 kubelet[1443]: E1002 19:44:27.350917 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:28.351894 kubelet[1443]: E1002 19:44:28.351836 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:28.597293 kubelet[1443]: W1002 19:44:28.597256 1443 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1ed0e1a_951e_4981_a148_7f66eae3559e.slice/cri-containerd-be6538c4efc04590d1ad90848be0e7f21339f9f13372b9573bf8d15e5583f2e3.scope WatchSource:0}: task be6538c4efc04590d1ad90848be0e7f21339f9f13372b9573bf8d15e5583f2e3 not found: not found Oct 2 19:44:29.304774 kubelet[1443]: E1002 19:44:29.304724 1443 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:29.353161 kubelet[1443]: E1002 19:44:29.353090 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:30.353284 kubelet[1443]: E1002 19:44:30.353248 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:31.354756 kubelet[1443]: E1002 19:44:31.354674 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:32.354980 kubelet[1443]: E1002 19:44:32.354939 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:33.355868 kubelet[1443]: E1002 19:44:33.355835 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:34.357269 kubelet[1443]: E1002 19:44:34.357194 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:35.357575 kubelet[1443]: E1002 19:44:35.357516 1443 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:36.358040 kubelet[1443]: E1002 19:44:36.358006 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:37.358765 kubelet[1443]: E1002 19:44:37.358720 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:37.407689 kubelet[1443]: E1002 19:44:37.407657 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:44:37.407932 kubelet[1443]: E1002 19:44:37.407917 1443 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-wlkps_kube-system(b1ed0e1a-951e-4981-a148-7f66eae3559e)\"" pod="kube-system/cilium-wlkps" podUID="b1ed0e1a-951e-4981-a148-7f66eae3559e" Oct 2 19:44:38.360043 kubelet[1443]: E1002 19:44:38.359940 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:39.360868 kubelet[1443]: E1002 19:44:39.360804 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:40.361176 kubelet[1443]: E1002 19:44:40.361122 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:41.362015 kubelet[1443]: E1002 19:44:41.361958 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:42.362775 kubelet[1443]: E1002 19:44:42.362748 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:43.363659 kubelet[1443]: E1002 19:44:43.363619 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:44.364310 kubelet[1443]: E1002 19:44:44.364274 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:45.365382 kubelet[1443]: E1002 19:44:45.365339 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:46.366503 kubelet[1443]: E1002 19:44:46.366465 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:47.367163 kubelet[1443]: E1002 19:44:47.367104 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:48.367535 kubelet[1443]: E1002 19:44:48.367510 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:49.305481 kubelet[1443]: E1002 19:44:49.305433 1443 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:49.367985 kubelet[1443]: E1002 19:44:49.367953 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:49.408766 kubelet[1443]: E1002 19:44:49.408744 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:44:49.409171 kubelet[1443]: E1002 19:44:49.409150 1443 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-wlkps_kube-system(b1ed0e1a-951e-4981-a148-7f66eae3559e)\"" pod="kube-system/cilium-wlkps" podUID="b1ed0e1a-951e-4981-a148-7f66eae3559e" Oct 2 19:44:50.369356 kubelet[1443]: E1002 19:44:50.369315 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:51.369498 kubelet[1443]: E1002 19:44:51.369430 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:52.369643 kubelet[1443]: E1002 19:44:52.369608 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:53.371254 kubelet[1443]: E1002 19:44:53.371200 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:54.371839 kubelet[1443]: E1002 19:44:54.371802 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:55.372138 kubelet[1443]: E1002 19:44:55.372097 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:56.373027 kubelet[1443]: E1002 19:44:56.372996 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:57.373831 kubelet[1443]: E1002 19:44:57.373762 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:58.374723 kubelet[1443]: E1002 19:44:58.374685 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:59.375379 kubelet[1443]: E1002 19:44:59.375347 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:00.376206 kubelet[1443]: E1002 19:45:00.376165 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:01.377195 kubelet[1443]: E1002 19:45:01.377145 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:01.409213 kubelet[1443]: E1002 19:45:01.408146 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:45:01.409213 kubelet[1443]: E1002 19:45:01.408373 1443 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-wlkps_kube-system(b1ed0e1a-951e-4981-a148-7f66eae3559e)\"" pod="kube-system/cilium-wlkps" podUID="b1ed0e1a-951e-4981-a148-7f66eae3559e" Oct 2 19:45:02.378358 kubelet[1443]: E1002 19:45:02.378282 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:03.379415 kubelet[1443]: E1002 19:45:03.379367 1443 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:04.379729 kubelet[1443]: E1002 19:45:04.379696 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:05.380139 kubelet[1443]: E1002 19:45:05.380088 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:06.380455 kubelet[1443]: E1002 19:45:06.380412 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:06.407758 kubelet[1443]: E1002 19:45:06.407731 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:45:07.380858 kubelet[1443]: E1002 19:45:07.380821 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:08.381195 kubelet[1443]: E1002 19:45:08.381158 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:09.305411 kubelet[1443]: E1002 19:45:09.305378 1443 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:09.381249 kubelet[1443]: E1002 19:45:09.381224 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:10.382030 kubelet[1443]: E1002 19:45:10.381984 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:11.382155 kubelet[1443]: E1002 19:45:11.382117 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:12.383088 kubelet[1443]: E1002 19:45:12.383053 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:13.383928 kubelet[1443]: E1002 19:45:13.383897 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:14.384409 kubelet[1443]: E1002 19:45:14.384370 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:14.407989 kubelet[1443]: E1002 19:45:14.407952 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:45:14.409969 env[1142]: time="2023-10-02T19:45:14.409917182Z" level=info msg="CreateContainer within sandbox \"0fdf4cd53816703871dc511dee0a04b212d320a92ad90bb0fba6ae2eee0628f4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 19:45:14.417321 env[1142]: time="2023-10-02T19:45:14.417271789Z" level=info msg="CreateContainer within sandbox \"0fdf4cd53816703871dc511dee0a04b212d320a92ad90bb0fba6ae2eee0628f4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"915334072368fe61839a87e18330ecf778b3b37caaec8a64ed5b45c1ecd19ba5\"" Oct 2 19:45:14.417768 env[1142]: time="2023-10-02T19:45:14.417740833Z" level=info msg="StartContainer for \"915334072368fe61839a87e18330ecf778b3b37caaec8a64ed5b45c1ecd19ba5\"" Oct 2 19:45:14.433953 systemd[1]: 
run-containerd-runc-k8s.io-915334072368fe61839a87e18330ecf778b3b37caaec8a64ed5b45c1ecd19ba5-runc.WMfzga.mount: Deactivated successfully. Oct 2 19:45:14.435194 systemd[1]: Started cri-containerd-915334072368fe61839a87e18330ecf778b3b37caaec8a64ed5b45c1ecd19ba5.scope. Oct 2 19:45:14.454544 systemd[1]: cri-containerd-915334072368fe61839a87e18330ecf778b3b37caaec8a64ed5b45c1ecd19ba5.scope: Deactivated successfully. Oct 2 19:45:14.477264 env[1142]: time="2023-10-02T19:45:14.477201534Z" level=info msg="shim disconnected" id=915334072368fe61839a87e18330ecf778b3b37caaec8a64ed5b45c1ecd19ba5 Oct 2 19:45:14.477264 env[1142]: time="2023-10-02T19:45:14.477255614Z" level=warning msg="cleaning up after shim disconnected" id=915334072368fe61839a87e18330ecf778b3b37caaec8a64ed5b45c1ecd19ba5 namespace=k8s.io Oct 2 19:45:14.477264 env[1142]: time="2023-10-02T19:45:14.477265734Z" level=info msg="cleaning up dead shim" Oct 2 19:45:14.485835 env[1142]: time="2023-10-02T19:45:14.485790549Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:45:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1944 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:45:14Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/915334072368fe61839a87e18330ecf778b3b37caaec8a64ed5b45c1ecd19ba5/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:45:14.486103 env[1142]: time="2023-10-02T19:45:14.486054870Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:45:14.486261 env[1142]: time="2023-10-02T19:45:14.486218751Z" level=error msg="Failed to pipe stdout of container \"915334072368fe61839a87e18330ecf778b3b37caaec8a64ed5b45c1ecd19ba5\"" error="reading from a closed fifo" Oct 2 19:45:14.487554 env[1142]: time="2023-10-02T19:45:14.487520640Z" level=error msg="Failed to pipe stderr of container \"915334072368fe61839a87e18330ecf778b3b37caaec8a64ed5b45c1ecd19ba5\"" error="reading from a closed fifo" Oct 2 19:45:14.491284 env[1142]: time="2023-10-02T19:45:14.491234184Z" level=error msg="StartContainer for \"915334072368fe61839a87e18330ecf778b3b37caaec8a64ed5b45c1ecd19ba5\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:45:14.491534 kubelet[1443]: E1002 19:45:14.491501 1443 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="915334072368fe61839a87e18330ecf778b3b37caaec8a64ed5b45c1ecd19ba5" Oct 2 19:45:14.491639 kubelet[1443]: E1002 19:45:14.491625 1443 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:45:14.491639 kubelet[1443]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:45:14.491639 kubelet[1443]: rm /hostbin/cilium-mount Oct 2 19:45:14.491639 kubelet[1443]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-d7ms9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-wlkps_kube-system(b1ed0e1a-951e-4981-a148-7f66eae3559e): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:45:14.491779 kubelet[1443]: E1002 19:45:14.491665 1443 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-wlkps" podUID="b1ed0e1a-951e-4981-a148-7f66eae3559e" Oct 2 19:45:14.578489 kubelet[1443]: I1002 19:45:14.578464 1443 scope.go:117] "RemoveContainer" containerID="be6538c4efc04590d1ad90848be0e7f21339f9f13372b9573bf8d15e5583f2e3" Oct 2 19:45:14.578762 kubelet[1443]: I1002 19:45:14.578740 1443 scope.go:117] "RemoveContainer" containerID="be6538c4efc04590d1ad90848be0e7f21339f9f13372b9573bf8d15e5583f2e3" Oct 2 19:45:14.579935 env[1142]: time="2023-10-02T19:45:14.579868712Z" level=info msg="RemoveContainer for \"be6538c4efc04590d1ad90848be0e7f21339f9f13372b9573bf8d15e5583f2e3\"" Oct 2 19:45:14.580355 env[1142]: time="2023-10-02T19:45:14.580331555Z" level=info msg="RemoveContainer for \"be6538c4efc04590d1ad90848be0e7f21339f9f13372b9573bf8d15e5583f2e3\"" Oct 2 19:45:14.580441 env[1142]: time="2023-10-02T19:45:14.580408675Z" level=error msg="RemoveContainer for \"be6538c4efc04590d1ad90848be0e7f21339f9f13372b9573bf8d15e5583f2e3\" failed" error="failed to set removing state for container \"be6538c4efc04590d1ad90848be0e7f21339f9f13372b9573bf8d15e5583f2e3\": container is already in removing state" Oct 2 19:45:14.580593 kubelet[1443]: E1002 19:45:14.580562 1443 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"be6538c4efc04590d1ad90848be0e7f21339f9f13372b9573bf8d15e5583f2e3\": 
container is already in removing state" containerID="be6538c4efc04590d1ad90848be0e7f21339f9f13372b9573bf8d15e5583f2e3" Oct 2 19:45:14.580698 kubelet[1443]: E1002 19:45:14.580683 1443 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "be6538c4efc04590d1ad90848be0e7f21339f9f13372b9573bf8d15e5583f2e3": container is already in removing state; Skipping pod "cilium-wlkps_kube-system(b1ed0e1a-951e-4981-a148-7f66eae3559e)" Oct 2 19:45:14.580826 kubelet[1443]: E1002 19:45:14.580810 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:45:14.581149 kubelet[1443]: E1002 19:45:14.581135 1443 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-wlkps_kube-system(b1ed0e1a-951e-4981-a148-7f66eae3559e)\"" pod="kube-system/cilium-wlkps" podUID="b1ed0e1a-951e-4981-a148-7f66eae3559e" Oct 2 19:45:14.582847 env[1142]: time="2023-10-02T19:45:14.582806091Z" level=info msg="RemoveContainer for \"be6538c4efc04590d1ad90848be0e7f21339f9f13372b9573bf8d15e5583f2e3\" returns successfully" Oct 2 19:45:15.385374 kubelet[1443]: E1002 19:45:15.385323 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:15.415279 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-915334072368fe61839a87e18330ecf778b3b37caaec8a64ed5b45c1ecd19ba5-rootfs.mount: Deactivated successfully. Oct 2 19:45:16.385965 kubelet[1443]: E1002 19:45:16.385910 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:17.386402 kubelet[1443]: E1002 19:45:17.386360 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:17.580955 kubelet[1443]: W1002 19:45:17.580909 1443 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1ed0e1a_951e_4981_a148_7f66eae3559e.slice/cri-containerd-915334072368fe61839a87e18330ecf778b3b37caaec8a64ed5b45c1ecd19ba5.scope WatchSource:0}: task 915334072368fe61839a87e18330ecf778b3b37caaec8a64ed5b45c1ecd19ba5 not found: not found Oct 2 19:45:18.386887 kubelet[1443]: E1002 19:45:18.386830 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:19.387529 kubelet[1443]: E1002 19:45:19.387495 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:20.388307 kubelet[1443]: E1002 19:45:20.388236 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:21.388685 kubelet[1443]: E1002 19:45:21.388657 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:22.390050 kubelet[1443]: E1002 19:45:22.390012 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:23.390952 kubelet[1443]: E1002 19:45:23.390913 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:24.391563 kubelet[1443]: E1002 19:45:24.391524 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:25.391692 kubelet[1443]: E1002 19:45:25.391654 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:26.392616 kubelet[1443]: E1002 19:45:26.392552 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:27.393353 kubelet[1443]: E1002 19:45:27.393304 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:28.394422 kubelet[1443]: E1002 19:45:28.394390 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:29.305064 kubelet[1443]: E1002 19:45:29.305038 1443 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:29.395265 kubelet[1443]: E1002 19:45:29.395204 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:29.404496 kubelet[1443]: E1002 19:45:29.404457 1443 kubelet_node_status.go:452] "Node not becoming ready in time after startup" Oct 2 19:45:29.408217 kubelet[1443]: E1002 19:45:29.408192 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:45:29.408554 kubelet[1443]: E1002 19:45:29.408536 1443 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-wlkps_kube-system(b1ed0e1a-951e-4981-a148-7f66eae3559e)\"" pod="kube-system/cilium-wlkps" podUID="b1ed0e1a-951e-4981-a148-7f66eae3559e" Oct 2 19:45:30.395958 kubelet[1443]: E1002 19:45:30.395908 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:31.396490 kubelet[1443]: E1002 19:45:31.396428 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:32.397532 kubelet[1443]: E1002 19:45:32.397481 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:33.398047 kubelet[1443]: E1002 19:45:33.397983 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:34.383262 kubelet[1443]: E1002 19:45:34.383226 1443 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:45:34.398396 kubelet[1443]: E1002 19:45:34.398371 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:35.398753 kubelet[1443]: E1002 19:45:35.398701 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:36.399867 kubelet[1443]: E1002 19:45:36.399822 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:45:37.400009 kubelet[1443]: E1002 19:45:37.399968 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:38.400906 kubelet[1443]: E1002 19:45:38.400843 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:39.384475 kubelet[1443]: E1002 19:45:39.384432 1443 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:45:39.401923 kubelet[1443]: E1002 19:45:39.401887 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:40.402046 kubelet[1443]: E1002 19:45:40.402009 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:41.403543 kubelet[1443]: E1002 19:45:41.403474 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:42.404398 kubelet[1443]: E1002 19:45:42.404359 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:43.404916 kubelet[1443]: E1002 19:45:43.404860 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:43.407602 kubelet[1443]: E1002 19:45:43.407586 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:45:43.407854 kubelet[1443]: E1002 19:45:43.407836 1443 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-wlkps_kube-system(b1ed0e1a-951e-4981-a148-7f66eae3559e)\"" pod="kube-system/cilium-wlkps" podUID="b1ed0e1a-951e-4981-a148-7f66eae3559e" Oct 2 19:45:44.385908 kubelet[1443]: E1002 19:45:44.385873 1443 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:45:44.405147 kubelet[1443]: E1002 19:45:44.405118 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:45.406011 kubelet[1443]: E1002 19:45:45.405969 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:46.406498 kubelet[1443]: E1002 19:45:46.406458 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:47.408049 kubelet[1443]: E1002 19:45:47.407992 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:48.408674 kubelet[1443]: E1002 19:45:48.408616 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:49.305344 kubelet[1443]: E1002 19:45:49.305303 1443 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:49.387270 kubelet[1443]: E1002 19:45:49.387247 
1443 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:45:49.408775 kubelet[1443]: E1002 19:45:49.408749 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:50.409909 kubelet[1443]: E1002 19:45:50.409867 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:51.410450 kubelet[1443]: E1002 19:45:51.410396 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:52.411154 kubelet[1443]: E1002 19:45:52.411080 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:53.412040 kubelet[1443]: E1002 19:45:53.412002 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:54.388760 kubelet[1443]: E1002 19:45:54.388734 1443 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:45:54.412485 kubelet[1443]: E1002 19:45:54.412432 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:55.413026 kubelet[1443]: E1002 19:45:55.412968 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:56.407692 kubelet[1443]: E1002 19:45:56.407658 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:45:56.408116 kubelet[1443]: E1002 19:45:56.408097 1443 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-wlkps_kube-system(b1ed0e1a-951e-4981-a148-7f66eae3559e)\"" pod="kube-system/cilium-wlkps" podUID="b1ed0e1a-951e-4981-a148-7f66eae3559e" Oct 2 19:45:56.413703 kubelet[1443]: E1002 19:45:56.413666 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:57.414392 kubelet[1443]: E1002 19:45:57.414349 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:58.415341 kubelet[1443]: E1002 19:45:58.415299 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:59.390185 kubelet[1443]: E1002 19:45:59.390148 1443 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:45:59.415672 kubelet[1443]: E1002 19:45:59.415638 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:00.417051 kubelet[1443]: E1002 19:46:00.417006 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:01.417509 kubelet[1443]: E1002 19:46:01.417460 1443 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:02.418199 kubelet[1443]: E1002 19:46:02.418133 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:03.418307 kubelet[1443]: E1002 19:46:03.418272 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:04.391055 kubelet[1443]: E1002 19:46:04.391018 1443 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:04.418571 kubelet[1443]: E1002 19:46:04.418521 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:05.419629 kubelet[1443]: E1002 19:46:05.419566 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:06.420631 kubelet[1443]: E1002 19:46:06.420560 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:07.421455 kubelet[1443]: E1002 19:46:07.421402 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:08.408075 kubelet[1443]: E1002 19:46:08.408037 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:08.408369 kubelet[1443]: E1002 19:46:08.408344 1443 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-wlkps_kube-system(b1ed0e1a-951e-4981-a148-7f66eae3559e)\"" pod="kube-system/cilium-wlkps" podUID="b1ed0e1a-951e-4981-a148-7f66eae3559e" Oct 2 19:46:08.422205 kubelet[1443]: E1002 19:46:08.422167 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:09.304551 kubelet[1443]: E1002 19:46:09.304504 1443 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:09.392021 kubelet[1443]: E1002 19:46:09.391992 1443 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:09.422282 kubelet[1443]: E1002 19:46:09.422252 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:10.422719 kubelet[1443]: E1002 19:46:10.422686 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:11.423760 kubelet[1443]: E1002 19:46:11.423707 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:12.424123 kubelet[1443]: E1002 19:46:12.424070 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:13.424809 kubelet[1443]: E1002 19:46:13.424763 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:46:14.393401 kubelet[1443]: E1002 19:46:14.393365 1443 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:14.425869 kubelet[1443]: E1002 19:46:14.425826 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:15.426963 kubelet[1443]: E1002 19:46:15.426923 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:16.427706 kubelet[1443]: E1002 19:46:16.427650 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:17.428509 kubelet[1443]: E1002 19:46:17.428476 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:18.407919 kubelet[1443]: E1002 19:46:18.407888 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:18.429384 kubelet[1443]: E1002 19:46:18.429352 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:19.394192 kubelet[1443]: E1002 19:46:19.394165 1443 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:19.408192 kubelet[1443]: E1002 19:46:19.408163 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:19.408394 kubelet[1443]: E1002 19:46:19.408374 1443 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-wlkps_kube-system(b1ed0e1a-951e-4981-a148-7f66eae3559e)\"" pod="kube-system/cilium-wlkps" podUID="b1ed0e1a-951e-4981-a148-7f66eae3559e" Oct 2 19:46:19.429857 kubelet[1443]: E1002 19:46:19.429819 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:20.430144 kubelet[1443]: E1002 19:46:20.430114 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:21.430788 kubelet[1443]: E1002 19:46:21.430762 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:22.431945 kubelet[1443]: E1002 19:46:22.431895 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:23.432784 kubelet[1443]: E1002 19:46:23.432752 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:24.395806 kubelet[1443]: E1002 19:46:24.395781 1443 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:24.433601 kubelet[1443]: E1002 19:46:24.433571 1443 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:25.434811 kubelet[1443]: E1002 19:46:25.434767 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:26.435645 kubelet[1443]: E1002 19:46:26.435598 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:27.435818 kubelet[1443]: E1002 19:46:27.435770 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:28.436330 kubelet[1443]: E1002 19:46:28.436289 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:29.305562 kubelet[1443]: E1002 19:46:29.305502 1443 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:29.397318 kubelet[1443]: E1002 19:46:29.397281 1443 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:29.437132 kubelet[1443]: E1002 19:46:29.437100 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:30.407632 kubelet[1443]: E1002 19:46:30.407593 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:30.407829 kubelet[1443]: E1002 19:46:30.407810 1443 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-wlkps_kube-system(b1ed0e1a-951e-4981-a148-7f66eae3559e)\"" pod="kube-system/cilium-wlkps" podUID="b1ed0e1a-951e-4981-a148-7f66eae3559e" Oct 2 19:46:30.438117 kubelet[1443]: E1002 19:46:30.438089 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:31.439448 kubelet[1443]: E1002 19:46:31.439400 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:32.440549 kubelet[1443]: E1002 19:46:32.440514 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:33.441508 kubelet[1443]: E1002 19:46:33.441465 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:34.398736 kubelet[1443]: E1002 19:46:34.398706 1443 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:34.442368 kubelet[1443]: E1002 19:46:34.442333 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:35.443079 kubelet[1443]: E1002 19:46:35.443030 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:36.443904 kubelet[1443]: E1002 19:46:36.443860 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:37.444027 
kubelet[1443]: E1002 19:46:37.443980 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:37.751400 env[1142]: time="2023-10-02T19:46:37.751284793Z" level=info msg="StopPodSandbox for \"0fdf4cd53816703871dc511dee0a04b212d320a92ad90bb0fba6ae2eee0628f4\"" Oct 2 19:46:37.751400 env[1142]: time="2023-10-02T19:46:37.751360553Z" level=info msg="Container to stop \"915334072368fe61839a87e18330ecf778b3b37caaec8a64ed5b45c1ecd19ba5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:46:37.752543 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0fdf4cd53816703871dc511dee0a04b212d320a92ad90bb0fba6ae2eee0628f4-shm.mount: Deactivated successfully. Oct 2 19:46:37.759067 systemd[1]: cri-containerd-0fdf4cd53816703871dc511dee0a04b212d320a92ad90bb0fba6ae2eee0628f4.scope: Deactivated successfully. Oct 2 19:46:37.760518 kernel: kauditd_printk_skb: 186 callbacks suppressed Oct 2 19:46:37.760591 kernel: audit: type=1334 audit(1696275997.757:638): prog-id=61 op=UNLOAD Oct 2 19:46:37.757000 audit: BPF prog-id=61 op=UNLOAD Oct 2 19:46:37.765000 audit: BPF prog-id=65 op=UNLOAD Oct 2 19:46:37.767461 kernel: audit: type=1334 audit(1696275997.765:639): prog-id=65 op=UNLOAD Oct 2 19:46:37.783214 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0fdf4cd53816703871dc511dee0a04b212d320a92ad90bb0fba6ae2eee0628f4-rootfs.mount: Deactivated successfully. Oct 2 19:46:37.788876 env[1142]: time="2023-10-02T19:46:37.788816638Z" level=info msg="shim disconnected" id=0fdf4cd53816703871dc511dee0a04b212d320a92ad90bb0fba6ae2eee0628f4 Oct 2 19:46:37.788876 env[1142]: time="2023-10-02T19:46:37.788865278Z" level=warning msg="cleaning up after shim disconnected" id=0fdf4cd53816703871dc511dee0a04b212d320a92ad90bb0fba6ae2eee0628f4 namespace=k8s.io Oct 2 19:46:37.788876 env[1142]: time="2023-10-02T19:46:37.788874878Z" level=info msg="cleaning up dead shim" Oct 2 19:46:37.798413 env[1142]: time="2023-10-02T19:46:37.798353710Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:46:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1983 runtime=io.containerd.runc.v2\n" Oct 2 19:46:37.798722 env[1142]: time="2023-10-02T19:46:37.798685391Z" level=info msg="TearDown network for sandbox \"0fdf4cd53816703871dc511dee0a04b212d320a92ad90bb0fba6ae2eee0628f4\" successfully" Oct 2 19:46:37.798722 env[1142]: time="2023-10-02T19:46:37.798713751Z" level=info msg="StopPodSandbox for \"0fdf4cd53816703871dc511dee0a04b212d320a92ad90bb0fba6ae2eee0628f4\" returns successfully" Oct 2 19:46:37.896069 kubelet[1443]: I1002 19:46:37.896031 1443 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-hostproc\") pod \"b1ed0e1a-951e-4981-a148-7f66eae3559e\" (UID: \"b1ed0e1a-951e-4981-a148-7f66eae3559e\") " Oct 2 19:46:37.896241 kubelet[1443]: I1002 19:46:37.896098 1443 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-bpf-maps\") pod \"b1ed0e1a-951e-4981-a148-7f66eae3559e\" (UID: \"b1ed0e1a-951e-4981-a148-7f66eae3559e\") " Oct 2 19:46:37.896241 kubelet[1443]: I1002 19:46:37.896122 1443 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b1ed0e1a-951e-4981-a148-7f66eae3559e-hubble-tls\") pod 
\"b1ed0e1a-951e-4981-a148-7f66eae3559e\" (UID: \"b1ed0e1a-951e-4981-a148-7f66eae3559e\") " Oct 2 19:46:37.896241 kubelet[1443]: I1002 19:46:37.896040 1443 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-hostproc" (OuterVolumeSpecName: "hostproc") pod "b1ed0e1a-951e-4981-a148-7f66eae3559e" (UID: "b1ed0e1a-951e-4981-a148-7f66eae3559e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:46:37.896241 kubelet[1443]: I1002 19:46:37.896152 1443 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-etc-cni-netd\") pod \"b1ed0e1a-951e-4981-a148-7f66eae3559e\" (UID: \"b1ed0e1a-951e-4981-a148-7f66eae3559e\") " Oct 2 19:46:37.896241 kubelet[1443]: I1002 19:46:37.896174 1443 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-host-proc-sys-kernel\") pod \"b1ed0e1a-951e-4981-a148-7f66eae3559e\" (UID: \"b1ed0e1a-951e-4981-a148-7f66eae3559e\") " Oct 2 19:46:37.896241 kubelet[1443]: I1002 19:46:37.896176 1443 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b1ed0e1a-951e-4981-a148-7f66eae3559e" (UID: "b1ed0e1a-951e-4981-a148-7f66eae3559e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:46:37.896241 kubelet[1443]: I1002 19:46:37.896193 1443 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-host-proc-sys-net\") pod \"b1ed0e1a-951e-4981-a148-7f66eae3559e\" (UID: \"b1ed0e1a-951e-4981-a148-7f66eae3559e\") " Oct 2 19:46:37.896241 kubelet[1443]: I1002 19:46:37.896196 1443 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b1ed0e1a-951e-4981-a148-7f66eae3559e" (UID: "b1ed0e1a-951e-4981-a148-7f66eae3559e"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:46:37.896241 kubelet[1443]: I1002 19:46:37.896210 1443 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-xtables-lock\") pod \"b1ed0e1a-951e-4981-a148-7f66eae3559e\" (UID: \"b1ed0e1a-951e-4981-a148-7f66eae3559e\") " Oct 2 19:46:37.896241 kubelet[1443]: I1002 19:46:37.896237 1443 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-cilium-cgroup\") pod \"b1ed0e1a-951e-4981-a148-7f66eae3559e\" (UID: \"b1ed0e1a-951e-4981-a148-7f66eae3559e\") " Oct 2 19:46:37.896603 kubelet[1443]: I1002 19:46:37.896255 1443 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-lib-modules\") pod \"b1ed0e1a-951e-4981-a148-7f66eae3559e\" (UID: \"b1ed0e1a-951e-4981-a148-7f66eae3559e\") " Oct 2 19:46:37.896603 kubelet[1443]: I1002 19:46:37.896276 1443 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b1ed0e1a-951e-4981-a148-7f66eae3559e-clustermesh-secrets\") pod \"b1ed0e1a-951e-4981-a148-7f66eae3559e\" (UID: \"b1ed0e1a-951e-4981-a148-7f66eae3559e\") " Oct 2 19:46:37.896603 kubelet[1443]: I1002 19:46:37.896301 1443 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-cni-path\") pod \"b1ed0e1a-951e-4981-a148-7f66eae3559e\" (UID: \"b1ed0e1a-951e-4981-a148-7f66eae3559e\") " Oct 2 19:46:37.896603 kubelet[1443]: I1002 19:46:37.896321 1443 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7ms9\" (UniqueName: \"kubernetes.io/projected/b1ed0e1a-951e-4981-a148-7f66eae3559e-kube-api-access-d7ms9\") pod \"b1ed0e1a-951e-4981-a148-7f66eae3559e\" (UID: \"b1ed0e1a-951e-4981-a148-7f66eae3559e\") " Oct 2 19:46:37.896603 kubelet[1443]: I1002 19:46:37.896341 1443 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b1ed0e1a-951e-4981-a148-7f66eae3559e-cilium-config-path\") pod \"b1ed0e1a-951e-4981-a148-7f66eae3559e\" (UID: \"b1ed0e1a-951e-4981-a148-7f66eae3559e\") " Oct 2 19:46:37.896603 kubelet[1443]: I1002 19:46:37.896356 1443 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-cilium-run\") pod \"b1ed0e1a-951e-4981-a148-7f66eae3559e\" (UID: \"b1ed0e1a-951e-4981-a148-7f66eae3559e\") " Oct 2 19:46:37.896603 kubelet[1443]: I1002 19:46:37.896387 1443 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-etc-cni-netd\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:46:37.896603 kubelet[1443]: I1002 19:46:37.896397 1443 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-bpf-maps\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:46:37.896603 kubelet[1443]: I1002 19:46:37.896407 1443 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-hostproc\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:46:37.896603 kubelet[1443]: I1002 19:46:37.896456 1443 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b1ed0e1a-951e-4981-a148-7f66eae3559e" (UID: "b1ed0e1a-951e-4981-a148-7f66eae3559e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:46:37.896603 kubelet[1443]: I1002 19:46:37.896501 1443 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b1ed0e1a-951e-4981-a148-7f66eae3559e" (UID: "b1ed0e1a-951e-4981-a148-7f66eae3559e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:46:37.896603 kubelet[1443]: I1002 19:46:37.896518 1443 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b1ed0e1a-951e-4981-a148-7f66eae3559e" (UID: "b1ed0e1a-951e-4981-a148-7f66eae3559e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:46:37.896603 kubelet[1443]: I1002 19:46:37.896547 1443 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-cni-path" (OuterVolumeSpecName: "cni-path") pod "b1ed0e1a-951e-4981-a148-7f66eae3559e" (UID: "b1ed0e1a-951e-4981-a148-7f66eae3559e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:46:37.896998 kubelet[1443]: I1002 19:46:37.896798 1443 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b1ed0e1a-951e-4981-a148-7f66eae3559e" (UID: "b1ed0e1a-951e-4981-a148-7f66eae3559e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:46:37.896998 kubelet[1443]: I1002 19:46:37.896823 1443 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b1ed0e1a-951e-4981-a148-7f66eae3559e" (UID: "b1ed0e1a-951e-4981-a148-7f66eae3559e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:46:37.897138 kubelet[1443]: I1002 19:46:37.897088 1443 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b1ed0e1a-951e-4981-a148-7f66eae3559e" (UID: "b1ed0e1a-951e-4981-a148-7f66eae3559e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:46:37.898704 kubelet[1443]: I1002 19:46:37.898635 1443 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1ed0e1a-951e-4981-a148-7f66eae3559e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b1ed0e1a-951e-4981-a148-7f66eae3559e" (UID: "b1ed0e1a-951e-4981-a148-7f66eae3559e"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:46:37.899956 systemd[1]: var-lib-kubelet-pods-b1ed0e1a\x2d951e\x2d4981\x2da148\x2d7f66eae3559e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:46:37.900051 systemd[1]: var-lib-kubelet-pods-b1ed0e1a\x2d951e\x2d4981\x2da148\x2d7f66eae3559e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:46:37.901461 systemd[1]: var-lib-kubelet-pods-b1ed0e1a\x2d951e\x2d4981\x2da148\x2d7f66eae3559e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd7ms9.mount: Deactivated successfully. Oct 2 19:46:37.901573 kubelet[1443]: I1002 19:46:37.901432 1443 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1ed0e1a-951e-4981-a148-7f66eae3559e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b1ed0e1a-951e-4981-a148-7f66eae3559e" (UID: "b1ed0e1a-951e-4981-a148-7f66eae3559e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:46:37.901915 kubelet[1443]: I1002 19:46:37.901891 1443 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1ed0e1a-951e-4981-a148-7f66eae3559e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b1ed0e1a-951e-4981-a148-7f66eae3559e" (UID: "b1ed0e1a-951e-4981-a148-7f66eae3559e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:46:37.902192 kubelet[1443]: I1002 19:46:37.902171 1443 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1ed0e1a-951e-4981-a148-7f66eae3559e-kube-api-access-d7ms9" (OuterVolumeSpecName: "kube-api-access-d7ms9") pod "b1ed0e1a-951e-4981-a148-7f66eae3559e" (UID: "b1ed0e1a-951e-4981-a148-7f66eae3559e"). InnerVolumeSpecName "kube-api-access-d7ms9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:46:37.996623 kubelet[1443]: I1002 19:46:37.996573 1443 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b1ed0e1a-951e-4981-a148-7f66eae3559e-cilium-config-path\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:46:37.996623 kubelet[1443]: I1002 19:46:37.996616 1443 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-cilium-run\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:46:37.996623 kubelet[1443]: I1002 19:46:37.996628 1443 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-cni-path\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:46:37.996817 kubelet[1443]: I1002 19:46:37.996639 1443 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-d7ms9\" (UniqueName: \"kubernetes.io/projected/b1ed0e1a-951e-4981-a148-7f66eae3559e-kube-api-access-d7ms9\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:46:37.996817 kubelet[1443]: I1002 19:46:37.996653 1443 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-host-proc-sys-kernel\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:46:37.996817 kubelet[1443]: I1002 19:46:37.996662 1443 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-host-proc-sys-net\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:46:37.996817 kubelet[1443]: I1002 19:46:37.996672 1443 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b1ed0e1a-951e-4981-a148-7f66eae3559e-hubble-tls\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:46:37.996817 kubelet[1443]: I1002 19:46:37.996681 1443 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-cilium-cgroup\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:46:37.996817 kubelet[1443]: I1002 19:46:37.996690 1443 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-lib-modules\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:46:37.996817 kubelet[1443]: I1002 19:46:37.996702 1443 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b1ed0e1a-951e-4981-a148-7f66eae3559e-clustermesh-secrets\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:46:37.996817 kubelet[1443]: I1002 19:46:37.996712 1443 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1ed0e1a-951e-4981-a148-7f66eae3559e-xtables-lock\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:46:38.444431 kubelet[1443]: E1002 19:46:38.444359 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:38.702979 kubelet[1443]: I1002 19:46:38.702873 1443 scope.go:117] "RemoveContainer" containerID="915334072368fe61839a87e18330ecf778b3b37caaec8a64ed5b45c1ecd19ba5" Oct 2 19:46:38.704479 env[1142]: time="2023-10-02T19:46:38.704419081Z" level=info msg="RemoveContainer for 
\"915334072368fe61839a87e18330ecf778b3b37caaec8a64ed5b45c1ecd19ba5\"" Oct 2 19:46:38.706600 systemd[1]: Removed slice kubepods-burstable-podb1ed0e1a_951e_4981_a148_7f66eae3559e.slice. Oct 2 19:46:38.708624 env[1142]: time="2023-10-02T19:46:38.708132173Z" level=info msg="RemoveContainer for \"915334072368fe61839a87e18330ecf778b3b37caaec8a64ed5b45c1ecd19ba5\" returns successfully" Oct 2 19:46:39.400148 kubelet[1443]: E1002 19:46:39.400092 1443 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:39.410194 kubelet[1443]: I1002 19:46:39.410162 1443 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b1ed0e1a-951e-4981-a148-7f66eae3559e" path="/var/lib/kubelet/pods/b1ed0e1a-951e-4981-a148-7f66eae3559e/volumes" Oct 2 19:46:39.444554 kubelet[1443]: E1002 19:46:39.444506 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:40.445514 kubelet[1443]: E1002 19:46:40.445433 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:40.861578 kubelet[1443]: I1002 19:46:40.861541 1443 topology_manager.go:215] "Topology Admit Handler" podUID="acb31958-de27-449d-a139-3b5783a29c9c" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-5vmmn" Oct 2 19:46:40.861805 kubelet[1443]: E1002 19:46:40.861789 1443 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b1ed0e1a-951e-4981-a148-7f66eae3559e" containerName="mount-cgroup" Oct 2 19:46:40.861867 kubelet[1443]: E1002 19:46:40.861859 1443 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b1ed0e1a-951e-4981-a148-7f66eae3559e" containerName="mount-cgroup" Oct 2 19:46:40.861920 kubelet[1443]: E1002 19:46:40.861912 1443 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b1ed0e1a-951e-4981-a148-7f66eae3559e" containerName="mount-cgroup" Oct 2 19:46:40.861974 kubelet[1443]: E1002 19:46:40.861965 1443 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b1ed0e1a-951e-4981-a148-7f66eae3559e" containerName="mount-cgroup" Oct 2 19:46:40.862039 kubelet[1443]: I1002 19:46:40.862030 1443 memory_manager.go:346] "RemoveStaleState removing state" podUID="b1ed0e1a-951e-4981-a148-7f66eae3559e" containerName="mount-cgroup" Oct 2 19:46:40.862092 kubelet[1443]: I1002 19:46:40.862084 1443 memory_manager.go:346] "RemoveStaleState removing state" podUID="b1ed0e1a-951e-4981-a148-7f66eae3559e" containerName="mount-cgroup" Oct 2 19:46:40.862144 kubelet[1443]: I1002 19:46:40.862135 1443 memory_manager.go:346] "RemoveStaleState removing state" podUID="b1ed0e1a-951e-4981-a148-7f66eae3559e" containerName="mount-cgroup" Oct 2 19:46:40.862200 kubelet[1443]: I1002 19:46:40.862192 1443 memory_manager.go:346] "RemoveStaleState removing state" podUID="b1ed0e1a-951e-4981-a148-7f66eae3559e" containerName="mount-cgroup" Oct 2 19:46:40.867132 kubelet[1443]: I1002 19:46:40.867028 1443 topology_manager.go:215] "Topology Admit Handler" podUID="c36e9a16-e4fb-42e8-9248-de5972ff8c8a" podNamespace="kube-system" podName="cilium-hp27b" Oct 2 19:46:40.867132 kubelet[1443]: E1002 19:46:40.867071 1443 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b1ed0e1a-951e-4981-a148-7f66eae3559e" containerName="mount-cgroup" Oct 2 19:46:40.867132 kubelet[1443]: I1002 19:46:40.867103 1443 memory_manager.go:346] "RemoveStaleState 
removing state" podUID="b1ed0e1a-951e-4981-a148-7f66eae3559e" containerName="mount-cgroup" Oct 2 19:46:40.869686 systemd[1]: Created slice kubepods-besteffort-podacb31958_de27_449d_a139_3b5783a29c9c.slice. Oct 2 19:46:40.874246 systemd[1]: Created slice kubepods-burstable-podc36e9a16_e4fb_42e8_9248_de5972ff8c8a.slice. Oct 2 19:46:40.913131 kubelet[1443]: I1002 19:46:40.913101 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9sns\" (UniqueName: \"kubernetes.io/projected/acb31958-de27-449d-a139-3b5783a29c9c-kube-api-access-l9sns\") pod \"cilium-operator-6bc8ccdb58-5vmmn\" (UID: \"acb31958-de27-449d-a139-3b5783a29c9c\") " pod="kube-system/cilium-operator-6bc8ccdb58-5vmmn" Oct 2 19:46:40.913131 kubelet[1443]: I1002 19:46:40.913141 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-hostproc\") pod \"cilium-hp27b\" (UID: \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\") " pod="kube-system/cilium-hp27b" Oct 2 19:46:40.913317 kubelet[1443]: I1002 19:46:40.913164 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-clustermesh-secrets\") pod \"cilium-hp27b\" (UID: \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\") " pod="kube-system/cilium-hp27b" Oct 2 19:46:40.913317 kubelet[1443]: I1002 19:46:40.913214 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzdl9\" (UniqueName: \"kubernetes.io/projected/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-kube-api-access-jzdl9\") pod \"cilium-hp27b\" (UID: \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\") " pod="kube-system/cilium-hp27b" Oct 2 19:46:40.913317 kubelet[1443]: I1002 19:46:40.913275 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-etc-cni-netd\") pod \"cilium-hp27b\" (UID: \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\") " pod="kube-system/cilium-hp27b" Oct 2 19:46:40.913461 kubelet[1443]: I1002 19:46:40.913329 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-lib-modules\") pod \"cilium-hp27b\" (UID: \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\") " pod="kube-system/cilium-hp27b" Oct 2 19:46:40.913461 kubelet[1443]: I1002 19:46:40.913353 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/acb31958-de27-449d-a139-3b5783a29c9c-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-5vmmn\" (UID: \"acb31958-de27-449d-a139-3b5783a29c9c\") " pod="kube-system/cilium-operator-6bc8ccdb58-5vmmn" Oct 2 19:46:40.913461 kubelet[1443]: I1002 19:46:40.913372 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-cilium-run\") pod \"cilium-hp27b\" (UID: \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\") " pod="kube-system/cilium-hp27b" Oct 2 19:46:40.913461 kubelet[1443]: I1002 19:46:40.913390 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-bpf-maps\") pod \"cilium-hp27b\" (UID: \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\") " pod="kube-system/cilium-hp27b" Oct 2 19:46:40.913461 kubelet[1443]: I1002 19:46:40.913412 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-cni-path\") pod \"cilium-hp27b\" (UID: \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\") " pod="kube-system/cilium-hp27b" Oct 2 19:46:40.913461 kubelet[1443]: I1002 19:46:40.913432 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-host-proc-sys-net\") pod \"cilium-hp27b\" (UID: \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\") " pod="kube-system/cilium-hp27b" Oct 2 19:46:40.913615 kubelet[1443]: I1002 19:46:40.913473 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-host-proc-sys-kernel\") pod \"cilium-hp27b\" (UID: \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\") " pod="kube-system/cilium-hp27b" Oct 2 19:46:40.913615 kubelet[1443]: I1002 19:46:40.913510 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-hubble-tls\") pod \"cilium-hp27b\" (UID: \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\") " pod="kube-system/cilium-hp27b" Oct 2 19:46:40.913615 kubelet[1443]: I1002 19:46:40.913534 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-cilium-cgroup\") pod \"cilium-hp27b\" (UID: \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\") " pod="kube-system/cilium-hp27b" Oct 2 19:46:40.913615 kubelet[1443]: I1002 19:46:40.913566 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-xtables-lock\") pod \"cilium-hp27b\" (UID: \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\") " pod="kube-system/cilium-hp27b" Oct 2 19:46:40.913615 kubelet[1443]: I1002 19:46:40.913592 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-cilium-config-path\") pod \"cilium-hp27b\" (UID: \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\") " pod="kube-system/cilium-hp27b" Oct 2 19:46:40.913726 kubelet[1443]: I1002 19:46:40.913618 1443 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-cilium-ipsec-secrets\") pod \"cilium-hp27b\" (UID: \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\") " pod="kube-system/cilium-hp27b" Oct 2 19:46:41.172455 kubelet[1443]: E1002 19:46:41.172307 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:41.173172 env[1142]: time="2023-10-02T19:46:41.173052896Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-5vmmn,Uid:acb31958-de27-449d-a139-3b5783a29c9c,Namespace:kube-system,Attempt:0,}" Oct 2 19:46:41.183329 kubelet[1443]: E1002 19:46:41.183301 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:41.184074 env[1142]: time="2023-10-02T19:46:41.183731731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hp27b,Uid:c36e9a16-e4fb-42e8-9248-de5972ff8c8a,Namespace:kube-system,Attempt:0,}" Oct 2 19:46:41.190718 env[1142]: time="2023-10-02T19:46:41.190647594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:46:41.190718 env[1142]: time="2023-10-02T19:46:41.190692834Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:46:41.190858 env[1142]: time="2023-10-02T19:46:41.190713634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:46:41.190971 env[1142]: time="2023-10-02T19:46:41.190942195Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/01adcc99929dde708054b2071dd122b4e7dc9d04f7045ba46c54237349d021d9 pid=2011 runtime=io.containerd.runc.v2 Oct 2 19:46:41.195365 env[1142]: time="2023-10-02T19:46:41.195177089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:46:41.195365 env[1142]: time="2023-10-02T19:46:41.195214169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:46:41.195365 env[1142]: time="2023-10-02T19:46:41.195224609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:46:41.195530 env[1142]: time="2023-10-02T19:46:41.195425449Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/05b1b7fd86c13eb9fbfc8b75bb43644981372fd48568126544cab1aceff618f1 pid=2028 runtime=io.containerd.runc.v2 Oct 2 19:46:41.203321 systemd[1]: Started cri-containerd-01adcc99929dde708054b2071dd122b4e7dc9d04f7045ba46c54237349d021d9.scope. Oct 2 19:46:41.215216 systemd[1]: Started cri-containerd-05b1b7fd86c13eb9fbfc8b75bb43644981372fd48568126544cab1aceff618f1.scope. 
Oct 2 19:46:41.245000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.245000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.252189 kernel: audit: type=1400 audit(1696276001.245:640): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.252247 kernel: audit: type=1400 audit(1696276001.245:641): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.252264 kernel: audit: type=1400 audit(1696276001.245:642): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.245000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.254727 kernel: audit: type=1400 audit(1696276001.245:643): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.245000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.255128 kernel: audit: type=1400 audit(1696276001.245:644): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.245000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.258207 kernel: audit: type=1400 audit(1696276001.245:645): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.245000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.260949 kernel: audit: type=1400 audit(1696276001.245:646): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.260994 kernel: audit: type=1400 audit(1696276001.245:647): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.245000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.245000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.245000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.248000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.248000 audit: BPF prog-id=72 op=LOAD Oct 2 19:46:41.251000 audit[2047]: AVC avc: denied { bpf } for pid=2047 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.251000 audit[2047]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000147b38 a2=10 a3=0 items=0 ppid=2028 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:41.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035623162376664383663313365623966626663386237356262343336 Oct 2 19:46:41.251000 audit[2047]: AVC avc: denied { perfmon } for pid=2047 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.251000 audit[2047]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001475a0 a2=3c a3=0 items=0 ppid=2028 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:41.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035623162376664383663313365623966626663386237356262343336 Oct 2 19:46:41.251000 audit[2047]: AVC avc: denied { bpf } for pid=2047 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.251000 audit[2047]: AVC avc: denied { bpf } for pid=2047 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.251000 audit[2047]: AVC avc: denied { bpf } for pid=2047 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.251000 audit[2047]: AVC avc: denied { perfmon } for pid=2047 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.251000 audit[2047]: AVC avc: denied { perfmon } for pid=2047 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.251000 audit[2047]: AVC avc: denied { perfmon } for pid=2047 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.251000 audit[2047]: AVC avc: 
denied { perfmon } for pid=2047 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.251000 audit[2047]: AVC avc: denied { perfmon } for pid=2047 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.251000 audit[2047]: AVC avc: denied { bpf } for pid=2047 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.251000 audit[2047]: AVC avc: denied { bpf } for pid=2047 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.251000 audit: BPF prog-id=73 op=LOAD Oct 2 19:46:41.251000 audit[2047]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001478e0 a2=78 a3=0 items=0 ppid=2028 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:41.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035623162376664383663313365623966626663386237356262343336 Oct 2 19:46:41.253000 audit[2047]: AVC avc: denied { bpf } for pid=2047 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.253000 audit[2047]: AVC avc: denied { bpf } for pid=2047 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.253000 audit[2047]: AVC avc: denied { perfmon } for pid=2047 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.253000 audit[2047]: AVC avc: denied { perfmon } for pid=2047 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.253000 audit[2047]: AVC avc: denied { perfmon } for pid=2047 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.253000 audit[2047]: AVC avc: denied { perfmon } for pid=2047 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.253000 audit[2047]: AVC avc: denied { perfmon } for pid=2047 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.253000 audit[2047]: AVC avc: denied { bpf } for pid=2047 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.253000 audit[2047]: AVC avc: denied { bpf } for pid=2047 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.253000 audit: BPF prog-id=74 op=LOAD Oct 2 19:46:41.253000 audit[2047]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000147670 a2=78 
a3=0 items=0 ppid=2028 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:41.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035623162376664383663313365623966626663386237356262343336 Oct 2 19:46:41.253000 audit: BPF prog-id=74 op=UNLOAD Oct 2 19:46:41.253000 audit: BPF prog-id=73 op=UNLOAD Oct 2 19:46:41.253000 audit[2047]: AVC avc: denied { bpf } for pid=2047 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.253000 audit[2047]: AVC avc: denied { bpf } for pid=2047 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.253000 audit[2047]: AVC avc: denied { bpf } for pid=2047 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.253000 audit[2047]: AVC avc: denied { perfmon } for pid=2047 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.253000 audit[2047]: AVC avc: denied { perfmon } for pid=2047 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.253000 audit[2047]: AVC avc: denied { perfmon } for pid=2047 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.253000 audit[2047]: AVC avc: denied { perfmon } for pid=2047 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.253000 audit[2047]: AVC avc: denied { perfmon } for pid=2047 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.253000 audit[2047]: AVC avc: denied { bpf } for pid=2047 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.253000 audit[2047]: AVC avc: denied { bpf } for pid=2047 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.253000 audit: BPF prog-id=75 op=LOAD Oct 2 19:46:41.253000 audit[2047]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000147b40 a2=78 a3=0 items=0 ppid=2028 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:41.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035623162376664383663313365623966626663386237356262343336 Oct 2 19:46:41.274000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit: BPF prog-id=76 op=LOAD Oct 2 19:46:41.274000 audit[2029]: AVC avc: denied { bpf } for pid=2029 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[2029]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000195b38 a2=10 a3=0 items=0 ppid=2011 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:41.274000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031616463633939393239646465373038303534623230373164643132 Oct 2 19:46:41.274000 audit[2029]: AVC avc: denied { perfmon } for pid=2029 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[2029]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=2011 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:41.274000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031616463633939393239646465373038303534623230373164643132 Oct 2 19:46:41.274000 audit[2029]: AVC avc: denied { bpf } for pid=2029 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[2029]: AVC avc: denied { bpf } for pid=2029 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[2029]: AVC avc: denied { bpf } for pid=2029 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[2029]: AVC avc: denied { perfmon } for pid=2029 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[2029]: AVC avc: denied { perfmon } for pid=2029 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[2029]: AVC avc: denied { perfmon } for pid=2029 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[2029]: AVC avc: denied { perfmon } for pid=2029 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[2029]: AVC avc: denied { perfmon } for pid=2029 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[2029]: AVC avc: denied { bpf } for pid=2029 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[2029]: AVC avc: denied { bpf } for pid=2029 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit: BPF prog-id=77 op=LOAD Oct 2 19:46:41.274000 audit[2029]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=2011 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:41.274000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031616463633939393239646465373038303534623230373164643132 Oct 2 19:46:41.274000 audit[2029]: AVC avc: denied { bpf } for pid=2029 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[2029]: AVC avc: denied { bpf } for pid=2029 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[2029]: AVC avc: denied { perfmon } for 
pid=2029 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[2029]: AVC avc: denied { perfmon } for pid=2029 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[2029]: AVC avc: denied { perfmon } for pid=2029 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[2029]: AVC avc: denied { perfmon } for pid=2029 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[2029]: AVC avc: denied { perfmon } for pid=2029 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[2029]: AVC avc: denied { bpf } for pid=2029 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[2029]: AVC avc: denied { bpf } for pid=2029 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit: BPF prog-id=78 op=LOAD Oct 2 19:46:41.274000 audit[2029]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=2011 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:41.274000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031616463633939393239646465373038303534623230373164643132 Oct 2 19:46:41.274000 audit: BPF prog-id=78 op=UNLOAD Oct 2 19:46:41.274000 audit: BPF prog-id=77 op=UNLOAD Oct 2 19:46:41.274000 audit[2029]: AVC avc: denied { bpf } for pid=2029 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[2029]: AVC avc: denied { bpf } for pid=2029 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[2029]: AVC avc: denied { bpf } for pid=2029 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[2029]: AVC avc: denied { perfmon } for pid=2029 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[2029]: AVC avc: denied { perfmon } for pid=2029 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[2029]: AVC avc: denied { perfmon } for pid=2029 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[2029]: AVC avc: denied { perfmon } for pid=2029 
comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[2029]: AVC avc: denied { perfmon } for pid=2029 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[2029]: AVC avc: denied { bpf } for pid=2029 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit[2029]: AVC avc: denied { bpf } for pid=2029 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:41.274000 audit: BPF prog-id=79 op=LOAD Oct 2 19:46:41.274000 audit[2029]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=2011 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:41.274000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031616463633939393239646465373038303534623230373164643132 Oct 2 19:46:41.282979 env[1142]: time="2023-10-02T19:46:41.282927178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hp27b,Uid:c36e9a16-e4fb-42e8-9248-de5972ff8c8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"05b1b7fd86c13eb9fbfc8b75bb43644981372fd48568126544cab1aceff618f1\"" Oct 2 19:46:41.283700 kubelet[1443]: E1002 19:46:41.283677 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:41.285677 env[1142]: time="2023-10-02T19:46:41.285641827Z" level=info msg="CreateContainer within sandbox \"05b1b7fd86c13eb9fbfc8b75bb43644981372fd48568126544cab1aceff618f1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:46:41.297186 env[1142]: time="2023-10-02T19:46:41.297135665Z" level=info msg="CreateContainer within sandbox \"05b1b7fd86c13eb9fbfc8b75bb43644981372fd48568126544cab1aceff618f1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c60a49415bc230b0bbee5176f184aa45520ab3762557f20919c78c637dcdcf40\"" Oct 2 19:46:41.297619 env[1142]: time="2023-10-02T19:46:41.297590187Z" level=info msg="StartContainer for \"c60a49415bc230b0bbee5176f184aa45520ab3762557f20919c78c637dcdcf40\"" Oct 2 19:46:41.304183 env[1142]: time="2023-10-02T19:46:41.304146288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-5vmmn,Uid:acb31958-de27-449d-a139-3b5783a29c9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"01adcc99929dde708054b2071dd122b4e7dc9d04f7045ba46c54237349d021d9\"" Oct 2 19:46:41.304889 kubelet[1443]: E1002 19:46:41.304861 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:41.305875 env[1142]: time="2023-10-02T19:46:41.305836214Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 2 
19:46:41.314728 systemd[1]: Started cri-containerd-c60a49415bc230b0bbee5176f184aa45520ab3762557f20919c78c637dcdcf40.scope. Oct 2 19:46:41.343555 systemd[1]: cri-containerd-c60a49415bc230b0bbee5176f184aa45520ab3762557f20919c78c637dcdcf40.scope: Deactivated successfully. Oct 2 19:46:41.357417 env[1142]: time="2023-10-02T19:46:41.357366384Z" level=info msg="shim disconnected" id=c60a49415bc230b0bbee5176f184aa45520ab3762557f20919c78c637dcdcf40 Oct 2 19:46:41.357417 env[1142]: time="2023-10-02T19:46:41.357419864Z" level=warning msg="cleaning up after shim disconnected" id=c60a49415bc230b0bbee5176f184aa45520ab3762557f20919c78c637dcdcf40 namespace=k8s.io Oct 2 19:46:41.357640 env[1142]: time="2023-10-02T19:46:41.357429344Z" level=info msg="cleaning up dead shim" Oct 2 19:46:41.366054 env[1142]: time="2023-10-02T19:46:41.366009012Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:46:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2112 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:46:41Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c60a49415bc230b0bbee5176f184aa45520ab3762557f20919c78c637dcdcf40/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:46:41.366297 env[1142]: time="2023-10-02T19:46:41.366250653Z" level=error msg="copy shim log" error="read /proc/self/fd/35: file already closed" Oct 2 19:46:41.366491 env[1142]: time="2023-10-02T19:46:41.366425894Z" level=error msg="Failed to pipe stdout of container \"c60a49415bc230b0bbee5176f184aa45520ab3762557f20919c78c637dcdcf40\"" error="reading from a closed fifo" Oct 2 19:46:41.366643 env[1142]: time="2023-10-02T19:46:41.366589614Z" level=error msg="Failed to pipe stderr of container \"c60a49415bc230b0bbee5176f184aa45520ab3762557f20919c78c637dcdcf40\"" error="reading from a closed fifo" Oct 2 19:46:41.368254 env[1142]: time="2023-10-02T19:46:41.368208860Z" level=error msg="StartContainer for \"c60a49415bc230b0bbee5176f184aa45520ab3762557f20919c78c637dcdcf40\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:46:41.368562 kubelet[1443]: E1002 19:46:41.368525 1443 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="c60a49415bc230b0bbee5176f184aa45520ab3762557f20919c78c637dcdcf40" Oct 2 19:46:41.368944 kubelet[1443]: E1002 19:46:41.368915 1443 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:46:41.368944 kubelet[1443]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:46:41.368944 kubelet[1443]: rm /hostbin/cilium-mount Oct 2 19:46:41.368944 kubelet[1443]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jzdl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-hp27b_kube-system(c36e9a16-e4fb-42e8-9248-de5972ff8c8a): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:46:41.369098 kubelet[1443]: E1002 19:46:41.368975 1443 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-hp27b" podUID="c36e9a16-e4fb-42e8-9248-de5972ff8c8a" Oct 2 19:46:41.446544 kubelet[1443]: E1002 19:46:41.446425 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:41.710002 kubelet[1443]: E1002 19:46:41.709910 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:41.712613 env[1142]: time="2023-10-02T19:46:41.712575116Z" level=info msg="CreateContainer within sandbox \"05b1b7fd86c13eb9fbfc8b75bb43644981372fd48568126544cab1aceff618f1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:46:41.722170 env[1142]: time="2023-10-02T19:46:41.722124908Z" level=info msg="CreateContainer within sandbox \"05b1b7fd86c13eb9fbfc8b75bb43644981372fd48568126544cab1aceff618f1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"dd33d9ae3d084942b73250ff7ec3a2c2fb7e5a5b9ff8b2e39eadbb206c3eb233\"" Oct 2 19:46:41.722729 env[1142]: time="2023-10-02T19:46:41.722698630Z" level=info msg="StartContainer for \"dd33d9ae3d084942b73250ff7ec3a2c2fb7e5a5b9ff8b2e39eadbb206c3eb233\"" Oct 2 19:46:41.737427 systemd[1]: Started cri-containerd-dd33d9ae3d084942b73250ff7ec3a2c2fb7e5a5b9ff8b2e39eadbb206c3eb233.scope. 
Oct 2 19:46:41.755640 systemd[1]: cri-containerd-dd33d9ae3d084942b73250ff7ec3a2c2fb7e5a5b9ff8b2e39eadbb206c3eb233.scope: Deactivated successfully. Oct 2 19:46:41.763834 env[1142]: time="2023-10-02T19:46:41.763790085Z" level=info msg="shim disconnected" id=dd33d9ae3d084942b73250ff7ec3a2c2fb7e5a5b9ff8b2e39eadbb206c3eb233 Oct 2 19:46:41.764065 env[1142]: time="2023-10-02T19:46:41.764033726Z" level=warning msg="cleaning up after shim disconnected" id=dd33d9ae3d084942b73250ff7ec3a2c2fb7e5a5b9ff8b2e39eadbb206c3eb233 namespace=k8s.io Oct 2 19:46:41.764136 env[1142]: time="2023-10-02T19:46:41.764122566Z" level=info msg="cleaning up dead shim" Oct 2 19:46:41.772769 env[1142]: time="2023-10-02T19:46:41.772728515Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:46:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2149 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:46:41Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/dd33d9ae3d084942b73250ff7ec3a2c2fb7e5a5b9ff8b2e39eadbb206c3eb233/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:46:41.773138 env[1142]: time="2023-10-02T19:46:41.773087116Z" level=error msg="copy shim log" error="read /proc/self/fd/38: file already closed" Oct 2 19:46:41.773547 env[1142]: time="2023-10-02T19:46:41.773502157Z" level=error msg="Failed to pipe stdout of container \"dd33d9ae3d084942b73250ff7ec3a2c2fb7e5a5b9ff8b2e39eadbb206c3eb233\"" error="reading from a closed fifo" Oct 2 19:46:41.773681 env[1142]: time="2023-10-02T19:46:41.773649798Z" level=error msg="Failed to pipe stderr of container \"dd33d9ae3d084942b73250ff7ec3a2c2fb7e5a5b9ff8b2e39eadbb206c3eb233\"" error="reading from a closed fifo" Oct 2 19:46:41.775352 env[1142]: time="2023-10-02T19:46:41.775318283Z" level=error msg="StartContainer for \"dd33d9ae3d084942b73250ff7ec3a2c2fb7e5a5b9ff8b2e39eadbb206c3eb233\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:46:41.775699 kubelet[1443]: E1002 19:46:41.775666 1443 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="dd33d9ae3d084942b73250ff7ec3a2c2fb7e5a5b9ff8b2e39eadbb206c3eb233" Oct 2 19:46:41.775789 kubelet[1443]: E1002 19:46:41.775768 1443 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:46:41.775789 kubelet[1443]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:46:41.775789 kubelet[1443]: rm /hostbin/cilium-mount Oct 2 19:46:41.775789 kubelet[1443]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jzdl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-hp27b_kube-system(c36e9a16-e4fb-42e8-9248-de5972ff8c8a): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:46:41.775923 kubelet[1443]: E1002 19:46:41.775810 1443 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-hp27b" podUID="c36e9a16-e4fb-42e8-9248-de5972ff8c8a" Oct 2 19:46:42.342346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1469897462.mount: Deactivated successfully. 
Oct 2 19:46:42.447063 kubelet[1443]: E1002 19:46:42.447018 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:42.714569 kubelet[1443]: I1002 19:46:42.714545 1443 scope.go:117] "RemoveContainer" containerID="c60a49415bc230b0bbee5176f184aa45520ab3762557f20919c78c637dcdcf40" Oct 2 19:46:42.714923 kubelet[1443]: I1002 19:46:42.714906 1443 scope.go:117] "RemoveContainer" containerID="c60a49415bc230b0bbee5176f184aa45520ab3762557f20919c78c637dcdcf40" Oct 2 19:46:42.716175 env[1142]: time="2023-10-02T19:46:42.716142144Z" level=info msg="RemoveContainer for \"c60a49415bc230b0bbee5176f184aa45520ab3762557f20919c78c637dcdcf40\"" Oct 2 19:46:42.716563 env[1142]: time="2023-10-02T19:46:42.716376185Z" level=info msg="RemoveContainer for \"c60a49415bc230b0bbee5176f184aa45520ab3762557f20919c78c637dcdcf40\"" Oct 2 19:46:42.716867 env[1142]: time="2023-10-02T19:46:42.716824666Z" level=error msg="RemoveContainer for \"c60a49415bc230b0bbee5176f184aa45520ab3762557f20919c78c637dcdcf40\" failed" error="failed to set removing state for container \"c60a49415bc230b0bbee5176f184aa45520ab3762557f20919c78c637dcdcf40\": container is already in removing state" Oct 2 19:46:42.717215 kubelet[1443]: E1002 19:46:42.717193 1443 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"c60a49415bc230b0bbee5176f184aa45520ab3762557f20919c78c637dcdcf40\": container is already in removing state" containerID="c60a49415bc230b0bbee5176f184aa45520ab3762557f20919c78c637dcdcf40" Oct 2 19:46:42.717306 kubelet[1443]: I1002 19:46:42.717293 1443 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c60a49415bc230b0bbee5176f184aa45520ab3762557f20919c78c637dcdcf40"} err="rpc error: code = Unknown desc = failed to set removing state for container \"c60a49415bc230b0bbee5176f184aa45520ab3762557f20919c78c637dcdcf40\": container is already in removing state" Oct 2 19:46:42.718970 env[1142]: time="2023-10-02T19:46:42.718938113Z" level=info msg="RemoveContainer for \"c60a49415bc230b0bbee5176f184aa45520ab3762557f20919c78c637dcdcf40\" returns successfully" Oct 2 19:46:42.719242 kubelet[1443]: E1002 19:46:42.719221 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:42.719448 kubelet[1443]: E1002 19:46:42.719428 1443 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-hp27b_kube-system(c36e9a16-e4fb-42e8-9248-de5972ff8c8a)\"" pod="kube-system/cilium-hp27b" podUID="c36e9a16-e4fb-42e8-9248-de5972ff8c8a" Oct 2 19:46:42.794311 env[1142]: time="2023-10-02T19:46:42.794266121Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:46:42.795305 env[1142]: time="2023-10-02T19:46:42.795266885Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:46:42.796648 env[1142]: time="2023-10-02T19:46:42.796620409Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:46:42.797169 env[1142]: time="2023-10-02T19:46:42.797144771Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Oct 2 19:46:42.799196 env[1142]: time="2023-10-02T19:46:42.799160777Z" level=info msg="CreateContainer within sandbox \"01adcc99929dde708054b2071dd122b4e7dc9d04f7045ba46c54237349d021d9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 19:46:42.807595 env[1142]: time="2023-10-02T19:46:42.807554805Z" level=info msg="CreateContainer within sandbox \"01adcc99929dde708054b2071dd122b4e7dc9d04f7045ba46c54237349d021d9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3c45d6244af338fd99fe77c0fd469b3f230f02dad64093a8c7eb9ab20aea7b1f\"" Oct 2 19:46:42.808035 env[1142]: time="2023-10-02T19:46:42.807994327Z" level=info msg="StartContainer for \"3c45d6244af338fd99fe77c0fd469b3f230f02dad64093a8c7eb9ab20aea7b1f\"" Oct 2 19:46:42.822688 systemd[1]: Started cri-containerd-3c45d6244af338fd99fe77c0fd469b3f230f02dad64093a8c7eb9ab20aea7b1f.scope. Oct 2 19:46:42.845818 kernel: kauditd_printk_skb: 106 callbacks suppressed Oct 2 19:46:42.845923 kernel: audit: type=1400 audit(1696276002.843:676): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.843000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.843000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.848869 kernel: audit: type=1400 audit(1696276002.843:677): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.849004 kernel: audit: type=1400 audit(1696276002.843:678): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.843000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.850467 kernel: audit: type=1400 audit(1696276002.843:679): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.843000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.843000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.853745 kernel: 
audit: type=1400 audit(1696276002.843:680): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.853810 kernel: audit: type=1400 audit(1696276002.843:681): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.843000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.855379 kernel: audit: type=1400 audit(1696276002.843:682): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.843000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.843000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.858720 kernel: audit: type=1400 audit(1696276002.843:683): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.858773 kernel: audit: type=1400 audit(1696276002.843:684): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.843000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.860297 kernel: audit: type=1400 audit(1696276002.844:685): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.844000 audit: BPF prog-id=80 op=LOAD Oct 2 19:46:42.848000 audit[2170]: AVC avc: denied { bpf } for pid=2170 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.848000 audit[2170]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=40001c5b38 a2=10 a3=0 items=0 ppid=2011 pid=2170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:42.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363343564363234346166333338666439396665373763306664343639 Oct 2 19:46:42.848000 audit[2170]: AVC avc: denied { perfmon } for pid=2170 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.848000 audit[2170]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001c55a0 a2=3c a3=0 items=0 ppid=2011 pid=2170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:42.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363343564363234346166333338666439396665373763306664343639 Oct 2 19:46:42.848000 audit[2170]: AVC avc: denied { bpf } for pid=2170 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.848000 audit[2170]: AVC avc: denied { bpf } for pid=2170 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.848000 audit[2170]: AVC avc: denied { bpf } for pid=2170 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.848000 audit[2170]: AVC avc: denied { perfmon } for pid=2170 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.848000 audit[2170]: AVC avc: denied { perfmon } for pid=2170 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.848000 audit[2170]: AVC avc: denied { perfmon } for pid=2170 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.848000 audit[2170]: AVC avc: denied { perfmon } for pid=2170 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.848000 audit[2170]: AVC avc: denied { perfmon } for pid=2170 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.848000 audit[2170]: AVC avc: denied { bpf } for pid=2170 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.848000 audit[2170]: AVC avc: denied { bpf } for pid=2170 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.848000 audit: BPF prog-id=81 op=LOAD Oct 2 19:46:42.848000 audit[2170]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001c58e0 a2=78 a3=0 items=0 ppid=2011 pid=2170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:42.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363343564363234346166333338666439396665373763306664343639 Oct 2 19:46:42.849000 audit[2170]: 
AVC avc: denied { bpf } for pid=2170 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.849000 audit[2170]: AVC avc: denied { bpf } for pid=2170 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.849000 audit[2170]: AVC avc: denied { perfmon } for pid=2170 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.849000 audit[2170]: AVC avc: denied { perfmon } for pid=2170 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.849000 audit[2170]: AVC avc: denied { perfmon } for pid=2170 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.849000 audit[2170]: AVC avc: denied { perfmon } for pid=2170 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.849000 audit[2170]: AVC avc: denied { perfmon } for pid=2170 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.849000 audit[2170]: AVC avc: denied { bpf } for pid=2170 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.849000 audit[2170]: AVC avc: denied { bpf } for pid=2170 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.849000 audit: BPF prog-id=82 op=LOAD Oct 2 19:46:42.849000 audit[2170]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001c5670 a2=78 a3=0 items=0 ppid=2011 pid=2170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:42.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363343564363234346166333338666439396665373763306664343639 Oct 2 19:46:42.850000 audit: BPF prog-id=82 op=UNLOAD Oct 2 19:46:42.850000 audit: BPF prog-id=81 op=UNLOAD Oct 2 19:46:42.850000 audit[2170]: AVC avc: denied { bpf } for pid=2170 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.850000 audit[2170]: AVC avc: denied { bpf } for pid=2170 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.850000 audit[2170]: AVC avc: denied { bpf } for pid=2170 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.850000 audit[2170]: AVC avc: denied { perfmon } for pid=2170 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.850000 audit[2170]: AVC avc: denied { 
perfmon } for pid=2170 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.850000 audit[2170]: AVC avc: denied { perfmon } for pid=2170 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.850000 audit[2170]: AVC avc: denied { perfmon } for pid=2170 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.850000 audit[2170]: AVC avc: denied { perfmon } for pid=2170 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.850000 audit[2170]: AVC avc: denied { bpf } for pid=2170 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.850000 audit[2170]: AVC avc: denied { bpf } for pid=2170 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:42.850000 audit: BPF prog-id=83 op=LOAD Oct 2 19:46:42.850000 audit[2170]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001c5b40 a2=78 a3=0 items=0 ppid=2011 pid=2170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:42.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363343564363234346166333338666439396665373763306664343639 Oct 2 19:46:42.876645 env[1142]: time="2023-10-02T19:46:42.876598673Z" level=info msg="StartContainer for \"3c45d6244af338fd99fe77c0fd469b3f230f02dad64093a8c7eb9ab20aea7b1f\" returns successfully" Oct 2 19:46:42.922000 audit[2181]: AVC avc: denied { map_create } for pid=2181 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c664,c734 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c664,c734 tclass=bpf permissive=0 Oct 2 19:46:42.922000 audit[2181]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-13 a0=0 a1=40001b9768 a2=48 a3=0 items=0 ppid=2011 pid=2181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c664,c734 key=(null) Oct 2 19:46:42.922000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 19:46:43.447841 kubelet[1443]: E1002 19:46:43.447792 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:43.718602 kubelet[1443]: E1002 19:46:43.718501 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:43.727403 kubelet[1443]: I1002 19:46:43.727362 1443 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-5vmmn" podStartSLOduration=2.235513351 
podCreationTimestamp="2023-10-02 19:46:40 +0000 UTC" firstStartedPulling="2023-10-02 19:46:41.305550333 +0000 UTC m=+192.952649534" lastFinishedPulling="2023-10-02 19:46:42.797363972 +0000 UTC m=+194.444463173" observedRunningTime="2023-10-02 19:46:43.72707795 +0000 UTC m=+195.374177151" watchObservedRunningTime="2023-10-02 19:46:43.72732699 +0000 UTC m=+195.374426191" Oct 2 19:46:44.401715 kubelet[1443]: E1002 19:46:44.401680 1443 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:44.448403 kubelet[1443]: E1002 19:46:44.448335 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:44.461988 kubelet[1443]: W1002 19:46:44.461944 1443 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc36e9a16_e4fb_42e8_9248_de5972ff8c8a.slice/cri-containerd-c60a49415bc230b0bbee5176f184aa45520ab3762557f20919c78c637dcdcf40.scope WatchSource:0}: container "c60a49415bc230b0bbee5176f184aa45520ab3762557f20919c78c637dcdcf40" in namespace "k8s.io": not found Oct 2 19:46:44.720203 kubelet[1443]: E1002 19:46:44.720112 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:45.449217 kubelet[1443]: E1002 19:46:45.449171 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:46.449560 kubelet[1443]: E1002 19:46:46.449520 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:47.450058 kubelet[1443]: E1002 19:46:47.450011 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:47.569193 kubelet[1443]: W1002 19:46:47.569161 1443 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc36e9a16_e4fb_42e8_9248_de5972ff8c8a.slice/cri-containerd-dd33d9ae3d084942b73250ff7ec3a2c2fb7e5a5b9ff8b2e39eadbb206c3eb233.scope WatchSource:0}: task dd33d9ae3d084942b73250ff7ec3a2c2fb7e5a5b9ff8b2e39eadbb206c3eb233 not found: not found Oct 2 19:46:48.450486 kubelet[1443]: E1002 19:46:48.450418 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:49.304820 kubelet[1443]: E1002 19:46:49.304778 1443 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:49.402386 kubelet[1443]: E1002 19:46:49.402361 1443 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:49.451580 kubelet[1443]: E1002 19:46:49.451543 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:50.452645 kubelet[1443]: E1002 19:46:50.452607 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:51.453304 kubelet[1443]: E1002 19:46:51.453246 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:52.453484 kubelet[1443]: E1002 19:46:52.453419 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:53.453961 kubelet[1443]: E1002 19:46:53.453931 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:54.403748 kubelet[1443]: E1002 19:46:54.403708 1443 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:54.408199 kubelet[1443]: E1002 19:46:54.408171 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:54.410206 env[1142]: time="2023-10-02T19:46:54.410171581Z" level=info msg="CreateContainer within sandbox \"05b1b7fd86c13eb9fbfc8b75bb43644981372fd48568126544cab1aceff618f1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:46:54.419757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount440537539.mount: Deactivated successfully. Oct 2 19:46:54.422359 env[1142]: time="2023-10-02T19:46:54.422317541Z" level=info msg="CreateContainer within sandbox \"05b1b7fd86c13eb9fbfc8b75bb43644981372fd48568126544cab1aceff618f1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"7af2c0c5b02fe30645a7e554cdc81b0bdf8530793a66cf3f8b32c2d70862f3b1\"" Oct 2 19:46:54.423067 env[1142]: time="2023-10-02T19:46:54.423018623Z" level=info msg="StartContainer for \"7af2c0c5b02fe30645a7e554cdc81b0bdf8530793a66cf3f8b32c2d70862f3b1\"" Oct 2 19:46:54.441253 systemd[1]: run-containerd-runc-k8s.io-7af2c0c5b02fe30645a7e554cdc81b0bdf8530793a66cf3f8b32c2d70862f3b1-runc.EjQy4y.mount: Deactivated successfully. Oct 2 19:46:54.442502 systemd[1]: Started cri-containerd-7af2c0c5b02fe30645a7e554cdc81b0bdf8530793a66cf3f8b32c2d70862f3b1.scope. Oct 2 19:46:54.455931 kubelet[1443]: E1002 19:46:54.454638 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:54.462038 systemd[1]: cri-containerd-7af2c0c5b02fe30645a7e554cdc81b0bdf8530793a66cf3f8b32c2d70862f3b1.scope: Deactivated successfully. 
Oct 2 19:46:54.578421 env[1142]: time="2023-10-02T19:46:54.578364125Z" level=info msg="shim disconnected" id=7af2c0c5b02fe30645a7e554cdc81b0bdf8530793a66cf3f8b32c2d70862f3b1 Oct 2 19:46:54.578633 env[1142]: time="2023-10-02T19:46:54.578450285Z" level=warning msg="cleaning up after shim disconnected" id=7af2c0c5b02fe30645a7e554cdc81b0bdf8530793a66cf3f8b32c2d70862f3b1 namespace=k8s.io Oct 2 19:46:54.578633 env[1142]: time="2023-10-02T19:46:54.578463485Z" level=info msg="cleaning up dead shim" Oct 2 19:46:54.586631 env[1142]: time="2023-10-02T19:46:54.586568391Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:46:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2228 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:46:54Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/7af2c0c5b02fe30645a7e554cdc81b0bdf8530793a66cf3f8b32c2d70862f3b1/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:46:54.586879 env[1142]: time="2023-10-02T19:46:54.586830752Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 19:46:54.587049 env[1142]: time="2023-10-02T19:46:54.587005553Z" level=error msg="Failed to pipe stdout of container \"7af2c0c5b02fe30645a7e554cdc81b0bdf8530793a66cf3f8b32c2d70862f3b1\"" error="reading from a closed fifo" Oct 2 19:46:54.588587 env[1142]: time="2023-10-02T19:46:54.588288557Z" level=error msg="Failed to pipe stderr of container \"7af2c0c5b02fe30645a7e554cdc81b0bdf8530793a66cf3f8b32c2d70862f3b1\"" error="reading from a closed fifo" Oct 2 19:46:54.589863 env[1142]: time="2023-10-02T19:46:54.589822082Z" level=error msg="StartContainer for \"7af2c0c5b02fe30645a7e554cdc81b0bdf8530793a66cf3f8b32c2d70862f3b1\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:46:54.590148 kubelet[1443]: E1002 19:46:54.590117 1443 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="7af2c0c5b02fe30645a7e554cdc81b0bdf8530793a66cf3f8b32c2d70862f3b1" Oct 2 19:46:54.590307 kubelet[1443]: E1002 19:46:54.590283 1443 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:46:54.590307 kubelet[1443]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:46:54.590307 kubelet[1443]: rm /hostbin/cilium-mount Oct 2 19:46:54.590307 kubelet[1443]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jzdl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-hp27b_kube-system(c36e9a16-e4fb-42e8-9248-de5972ff8c8a): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:46:54.590468 kubelet[1443]: E1002 19:46:54.590362 1443 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-hp27b" podUID="c36e9a16-e4fb-42e8-9248-de5972ff8c8a" Oct 2 19:46:54.738186 kubelet[1443]: I1002 19:46:54.737371 1443 scope.go:117] "RemoveContainer" containerID="dd33d9ae3d084942b73250ff7ec3a2c2fb7e5a5b9ff8b2e39eadbb206c3eb233" Oct 2 19:46:54.738405 kubelet[1443]: I1002 19:46:54.738378 1443 scope.go:117] "RemoveContainer" containerID="dd33d9ae3d084942b73250ff7ec3a2c2fb7e5a5b9ff8b2e39eadbb206c3eb233" Oct 2 19:46:54.740029 env[1142]: time="2023-10-02T19:46:54.739993247Z" level=info msg="RemoveContainer for \"dd33d9ae3d084942b73250ff7ec3a2c2fb7e5a5b9ff8b2e39eadbb206c3eb233\"" Oct 2 19:46:54.742543 env[1142]: time="2023-10-02T19:46:54.742507615Z" level=info msg="RemoveContainer for \"dd33d9ae3d084942b73250ff7ec3a2c2fb7e5a5b9ff8b2e39eadbb206c3eb233\"" Oct 2 19:46:54.742655 env[1142]: time="2023-10-02T19:46:54.742586015Z" level=error msg="RemoveContainer for \"dd33d9ae3d084942b73250ff7ec3a2c2fb7e5a5b9ff8b2e39eadbb206c3eb233\" failed" error="rpc error: code = NotFound desc = get container info: container \"dd33d9ae3d084942b73250ff7ec3a2c2fb7e5a5b9ff8b2e39eadbb206c3eb233\" in namespace \"k8s.io\": not found" Oct 2 19:46:54.742857 kubelet[1443]: E1002 19:46:54.742813 1443 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = NotFound desc = get container info: container \"dd33d9ae3d084942b73250ff7ec3a2c2fb7e5a5b9ff8b2e39eadbb206c3eb233\" in 
namespace \"k8s.io\": not found" containerID="dd33d9ae3d084942b73250ff7ec3a2c2fb7e5a5b9ff8b2e39eadbb206c3eb233" Oct 2 19:46:54.742917 kubelet[1443]: E1002 19:46:54.742868 1443 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = NotFound desc = get container info: container "dd33d9ae3d084942b73250ff7ec3a2c2fb7e5a5b9ff8b2e39eadbb206c3eb233" in namespace "k8s.io": not found; Skipping pod "cilium-hp27b_kube-system(c36e9a16-e4fb-42e8-9248-de5972ff8c8a)" Oct 2 19:46:54.742972 kubelet[1443]: E1002 19:46:54.742936 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:54.743215 kubelet[1443]: E1002 19:46:54.743191 1443 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-hp27b_kube-system(c36e9a16-e4fb-42e8-9248-de5972ff8c8a)\"" pod="kube-system/cilium-hp27b" podUID="c36e9a16-e4fb-42e8-9248-de5972ff8c8a" Oct 2 19:46:54.744814 env[1142]: time="2023-10-02T19:46:54.744773382Z" level=info msg="RemoveContainer for \"dd33d9ae3d084942b73250ff7ec3a2c2fb7e5a5b9ff8b2e39eadbb206c3eb233\" returns successfully" Oct 2 19:46:55.417963 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7af2c0c5b02fe30645a7e554cdc81b0bdf8530793a66cf3f8b32c2d70862f3b1-rootfs.mount: Deactivated successfully. Oct 2 19:46:55.455327 kubelet[1443]: E1002 19:46:55.455288 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:56.455796 kubelet[1443]: E1002 19:46:56.455751 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:57.456511 kubelet[1443]: E1002 19:46:57.456476 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:57.684357 kubelet[1443]: W1002 19:46:57.684323 1443 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc36e9a16_e4fb_42e8_9248_de5972ff8c8a.slice/cri-containerd-7af2c0c5b02fe30645a7e554cdc81b0bdf8530793a66cf3f8b32c2d70862f3b1.scope WatchSource:0}: task 7af2c0c5b02fe30645a7e554cdc81b0bdf8530793a66cf3f8b32c2d70862f3b1 not found: not found Oct 2 19:46:58.457319 kubelet[1443]: E1002 19:46:58.457284 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:59.404806 kubelet[1443]: E1002 19:46:59.404779 1443 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:59.458183 kubelet[1443]: E1002 19:46:59.458153 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:00.458743 kubelet[1443]: E1002 19:47:00.458704 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:01.459526 kubelet[1443]: E1002 19:47:01.459497 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:02.460888 kubelet[1443]: E1002 19:47:02.460840 1443 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:03.461166 kubelet[1443]: E1002 19:47:03.461134 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:04.406016 kubelet[1443]: E1002 19:47:04.405991 1443 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:47:04.462601 kubelet[1443]: E1002 19:47:04.462565 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:05.462731 kubelet[1443]: E1002 19:47:05.462674 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:06.463449 kubelet[1443]: E1002 19:47:06.463410 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:07.464135 kubelet[1443]: E1002 19:47:07.464086 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:08.464568 kubelet[1443]: E1002 19:47:08.464521 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:09.305253 kubelet[1443]: E1002 19:47:09.305220 1443 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:09.408114 kubelet[1443]: E1002 19:47:09.407419 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:47:09.408114 kubelet[1443]: E1002 19:47:09.407682 1443 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-hp27b_kube-system(c36e9a16-e4fb-42e8-9248-de5972ff8c8a)\"" pod="kube-system/cilium-hp27b" podUID="c36e9a16-e4fb-42e8-9248-de5972ff8c8a" Oct 2 19:47:09.408552 kubelet[1443]: E1002 19:47:09.408528 1443 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:47:09.464842 kubelet[1443]: E1002 19:47:09.464807 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:10.465699 kubelet[1443]: E1002 19:47:10.465657 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:11.466340 kubelet[1443]: E1002 19:47:11.466308 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:12.466822 kubelet[1443]: E1002 19:47:12.466780 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:13.467465 kubelet[1443]: E1002 19:47:13.467417 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:14.409278 kubelet[1443]: E1002 19:47:14.409244 1443 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" Oct 2 19:47:14.468477 kubelet[1443]: E1002 19:47:14.468426 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:15.468965 kubelet[1443]: E1002 19:47:15.468920 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:16.469988 kubelet[1443]: E1002 19:47:16.469950 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:17.470213 kubelet[1443]: E1002 19:47:17.470156 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:18.470368 kubelet[1443]: E1002 19:47:18.470306 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:19.410645 kubelet[1443]: E1002 19:47:19.410621 1443 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:47:19.471013 kubelet[1443]: E1002 19:47:19.470982 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:20.472107 kubelet[1443]: E1002 19:47:20.472046 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:21.407661 kubelet[1443]: E1002 19:47:21.407627 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:47:21.473127 kubelet[1443]: E1002 19:47:21.473095 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:22.407416 kubelet[1443]: E1002 19:47:22.407337 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:47:22.409558 env[1142]: time="2023-10-02T19:47:22.409432931Z" level=info msg="CreateContainer within sandbox \"05b1b7fd86c13eb9fbfc8b75bb43644981372fd48568126544cab1aceff618f1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:47:22.419845 env[1142]: time="2023-10-02T19:47:22.419764899Z" level=info msg="CreateContainer within sandbox \"05b1b7fd86c13eb9fbfc8b75bb43644981372fd48568126544cab1aceff618f1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"697b27b354d52877fc6779c54ecd8bf8e86a673d6b391dd188681313ea78befd\"" Oct 2 19:47:22.420165 env[1142]: time="2023-10-02T19:47:22.420142301Z" level=info msg="StartContainer for \"697b27b354d52877fc6779c54ecd8bf8e86a673d6b391dd188681313ea78befd\"" Oct 2 19:47:22.439928 systemd[1]: Started cri-containerd-697b27b354d52877fc6779c54ecd8bf8e86a673d6b391dd188681313ea78befd.scope. Oct 2 19:47:22.471145 systemd[1]: cri-containerd-697b27b354d52877fc6779c54ecd8bf8e86a673d6b391dd188681313ea78befd.scope: Deactivated successfully. 
Oct 2 19:47:22.474587 kubelet[1443]: E1002 19:47:22.474343 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:22.474427 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-697b27b354d52877fc6779c54ecd8bf8e86a673d6b391dd188681313ea78befd-rootfs.mount: Deactivated successfully. Oct 2 19:47:22.481241 env[1142]: time="2023-10-02T19:47:22.481196144Z" level=info msg="shim disconnected" id=697b27b354d52877fc6779c54ecd8bf8e86a673d6b391dd188681313ea78befd Oct 2 19:47:22.481423 env[1142]: time="2023-10-02T19:47:22.481405585Z" level=warning msg="cleaning up after shim disconnected" id=697b27b354d52877fc6779c54ecd8bf8e86a673d6b391dd188681313ea78befd namespace=k8s.io Oct 2 19:47:22.481549 env[1142]: time="2023-10-02T19:47:22.481534345Z" level=info msg="cleaning up dead shim" Oct 2 19:47:22.490338 env[1142]: time="2023-10-02T19:47:22.490294986Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:47:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2269 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:47:22Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/697b27b354d52877fc6779c54ecd8bf8e86a673d6b391dd188681313ea78befd/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:47:22.490750 env[1142]: time="2023-10-02T19:47:22.490694508Z" level=error msg="copy shim log" error="read /proc/self/fd/47: file already closed" Oct 2 19:47:22.494758 env[1142]: time="2023-10-02T19:47:22.491511111Z" level=error msg="Failed to pipe stderr of container \"697b27b354d52877fc6779c54ecd8bf8e86a673d6b391dd188681313ea78befd\"" error="reading from a closed fifo" Oct 2 19:47:22.494907 env[1142]: time="2023-10-02T19:47:22.494528245Z" level=error msg="Failed to pipe stdout of container \"697b27b354d52877fc6779c54ecd8bf8e86a673d6b391dd188681313ea78befd\"" error="reading from a closed fifo" Oct 2 19:47:22.496212 env[1142]: time="2023-10-02T19:47:22.496171653Z" level=error msg="StartContainer for \"697b27b354d52877fc6779c54ecd8bf8e86a673d6b391dd188681313ea78befd\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:47:22.496548 kubelet[1443]: E1002 19:47:22.496519 1443 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="697b27b354d52877fc6779c54ecd8bf8e86a673d6b391dd188681313ea78befd" Oct 2 19:47:22.496656 kubelet[1443]: E1002 19:47:22.496637 1443 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:47:22.496656 kubelet[1443]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:47:22.496656 kubelet[1443]: rm /hostbin/cilium-mount Oct 2 19:47:22.496656 kubelet[1443]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jzdl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-hp27b_kube-system(c36e9a16-e4fb-42e8-9248-de5972ff8c8a): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:47:22.496805 kubelet[1443]: E1002 19:47:22.496677 1443 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-hp27b" podUID="c36e9a16-e4fb-42e8-9248-de5972ff8c8a" Oct 2 19:47:22.790075 kubelet[1443]: I1002 19:47:22.789985 1443 scope.go:117] "RemoveContainer" containerID="7af2c0c5b02fe30645a7e554cdc81b0bdf8530793a66cf3f8b32c2d70862f3b1" Oct 2 19:47:22.790953 kubelet[1443]: I1002 19:47:22.790923 1443 scope.go:117] "RemoveContainer" containerID="7af2c0c5b02fe30645a7e554cdc81b0bdf8530793a66cf3f8b32c2d70862f3b1" Oct 2 19:47:22.793418 env[1142]: time="2023-10-02T19:47:22.793366548Z" level=info msg="RemoveContainer for \"7af2c0c5b02fe30645a7e554cdc81b0bdf8530793a66cf3f8b32c2d70862f3b1\"" Oct 2 19:47:22.793661 env[1142]: time="2023-10-02T19:47:22.793628390Z" level=info msg="RemoveContainer for \"7af2c0c5b02fe30645a7e554cdc81b0bdf8530793a66cf3f8b32c2d70862f3b1\"" Oct 2 19:47:22.793744 env[1142]: time="2023-10-02T19:47:22.793713430Z" level=error msg="RemoveContainer for \"7af2c0c5b02fe30645a7e554cdc81b0bdf8530793a66cf3f8b32c2d70862f3b1\" failed" error="failed to set removing state for container \"7af2c0c5b02fe30645a7e554cdc81b0bdf8530793a66cf3f8b32c2d70862f3b1\": container is already in removing state" Oct 2 19:47:22.793928 kubelet[1443]: E1002 19:47:22.793901 1443 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"7af2c0c5b02fe30645a7e554cdc81b0bdf8530793a66cf3f8b32c2d70862f3b1\": 
container is already in removing state" containerID="7af2c0c5b02fe30645a7e554cdc81b0bdf8530793a66cf3f8b32c2d70862f3b1" Oct 2 19:47:22.794031 kubelet[1443]: E1002 19:47:22.793933 1443 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "7af2c0c5b02fe30645a7e554cdc81b0bdf8530793a66cf3f8b32c2d70862f3b1": container is already in removing state; Skipping pod "cilium-hp27b_kube-system(c36e9a16-e4fb-42e8-9248-de5972ff8c8a)" Oct 2 19:47:22.794031 kubelet[1443]: E1002 19:47:22.793992 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:47:22.794219 kubelet[1443]: E1002 19:47:22.794196 1443 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-hp27b_kube-system(c36e9a16-e4fb-42e8-9248-de5972ff8c8a)\"" pod="kube-system/cilium-hp27b" podUID="c36e9a16-e4fb-42e8-9248-de5972ff8c8a" Oct 2 19:47:22.797174 env[1142]: time="2023-10-02T19:47:22.797142446Z" level=info msg="RemoveContainer for \"7af2c0c5b02fe30645a7e554cdc81b0bdf8530793a66cf3f8b32c2d70862f3b1\" returns successfully" Oct 2 19:47:23.475539 kubelet[1443]: E1002 19:47:23.475493 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:24.412314 kubelet[1443]: E1002 19:47:24.412278 1443 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:47:24.475903 kubelet[1443]: E1002 19:47:24.475856 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:25.477019 kubelet[1443]: E1002 19:47:25.476943 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:25.586505 kubelet[1443]: W1002 19:47:25.586455 1443 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc36e9a16_e4fb_42e8_9248_de5972ff8c8a.slice/cri-containerd-697b27b354d52877fc6779c54ecd8bf8e86a673d6b391dd188681313ea78befd.scope WatchSource:0}: task 697b27b354d52877fc6779c54ecd8bf8e86a673d6b391dd188681313ea78befd not found: not found Oct 2 19:47:26.477359 kubelet[1443]: E1002 19:47:26.477287 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:27.477699 kubelet[1443]: E1002 19:47:27.477624 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:28.478076 kubelet[1443]: E1002 19:47:28.478019 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:29.305189 kubelet[1443]: E1002 19:47:29.305126 1443 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:29.320284 env[1142]: time="2023-10-02T19:47:29.320245566Z" level=info msg="StopPodSandbox for \"0fdf4cd53816703871dc511dee0a04b212d320a92ad90bb0fba6ae2eee0628f4\"" Oct 2 19:47:29.320592 env[1142]: time="2023-10-02T19:47:29.320329486Z" level=info 
msg="TearDown network for sandbox \"0fdf4cd53816703871dc511dee0a04b212d320a92ad90bb0fba6ae2eee0628f4\" successfully" Oct 2 19:47:29.320592 env[1142]: time="2023-10-02T19:47:29.320373446Z" level=info msg="StopPodSandbox for \"0fdf4cd53816703871dc511dee0a04b212d320a92ad90bb0fba6ae2eee0628f4\" returns successfully" Oct 2 19:47:29.320845 env[1142]: time="2023-10-02T19:47:29.320817928Z" level=info msg="RemovePodSandbox for \"0fdf4cd53816703871dc511dee0a04b212d320a92ad90bb0fba6ae2eee0628f4\"" Oct 2 19:47:29.320957 env[1142]: time="2023-10-02T19:47:29.320924729Z" level=info msg="Forcibly stopping sandbox \"0fdf4cd53816703871dc511dee0a04b212d320a92ad90bb0fba6ae2eee0628f4\"" Oct 2 19:47:29.321060 env[1142]: time="2023-10-02T19:47:29.321042289Z" level=info msg="TearDown network for sandbox \"0fdf4cd53816703871dc511dee0a04b212d320a92ad90bb0fba6ae2eee0628f4\" successfully" Oct 2 19:47:29.323853 env[1142]: time="2023-10-02T19:47:29.323824302Z" level=info msg="RemovePodSandbox \"0fdf4cd53816703871dc511dee0a04b212d320a92ad90bb0fba6ae2eee0628f4\" returns successfully" Oct 2 19:47:29.412770 kubelet[1443]: E1002 19:47:29.412741 1443 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:47:29.478725 kubelet[1443]: E1002 19:47:29.478629 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:30.479716 kubelet[1443]: E1002 19:47:30.479662 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:31.480004 kubelet[1443]: E1002 19:47:31.479939 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:32.480662 kubelet[1443]: E1002 19:47:32.480584 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:33.481316 kubelet[1443]: E1002 19:47:33.481268 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:34.407808 kubelet[1443]: E1002 19:47:34.407615 1443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:47:34.407966 kubelet[1443]: E1002 19:47:34.407837 1443 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-hp27b_kube-system(c36e9a16-e4fb-42e8-9248-de5972ff8c8a)\"" pod="kube-system/cilium-hp27b" podUID="c36e9a16-e4fb-42e8-9248-de5972ff8c8a" Oct 2 19:47:34.413885 kubelet[1443]: E1002 19:47:34.413865 1443 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:47:34.482217 kubelet[1443]: E1002 19:47:34.482183 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:35.483481 kubelet[1443]: E1002 19:47:35.483418 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:36.484403 kubelet[1443]: E1002 19:47:36.484367 1443 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:37.485901 kubelet[1443]: E1002 19:47:37.485866 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:38.487213 kubelet[1443]: E1002 19:47:38.487144 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:39.414883 kubelet[1443]: E1002 19:47:39.414844 1443 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:47:39.487299 kubelet[1443]: E1002 19:47:39.487264 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:40.488153 kubelet[1443]: E1002 19:47:40.488106 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:41.488626 kubelet[1443]: E1002 19:47:41.488589 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:42.019629 env[1142]: time="2023-10-02T19:47:42.019571756Z" level=info msg="StopPodSandbox for \"05b1b7fd86c13eb9fbfc8b75bb43644981372fd48568126544cab1aceff618f1\"" Oct 2 19:47:42.020750 env[1142]: time="2023-10-02T19:47:42.019657837Z" level=info msg="Container to stop \"697b27b354d52877fc6779c54ecd8bf8e86a673d6b391dd188681313ea78befd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:47:42.020909 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-05b1b7fd86c13eb9fbfc8b75bb43644981372fd48568126544cab1aceff618f1-shm.mount: Deactivated successfully. Oct 2 19:47:42.027190 systemd[1]: cri-containerd-05b1b7fd86c13eb9fbfc8b75bb43644981372fd48568126544cab1aceff618f1.scope: Deactivated successfully. Oct 2 19:47:42.027000 audit: BPF prog-id=72 op=UNLOAD Oct 2 19:47:42.028593 kernel: kauditd_printk_skb: 50 callbacks suppressed Oct 2 19:47:42.028640 kernel: audit: type=1334 audit(1696276062.027:695): prog-id=72 op=UNLOAD Oct 2 19:47:42.034000 audit: BPF prog-id=75 op=UNLOAD Oct 2 19:47:42.035641 kernel: audit: type=1334 audit(1696276062.034:696): prog-id=75 op=UNLOAD Oct 2 19:47:42.035887 env[1142]: time="2023-10-02T19:47:42.035836346Z" level=info msg="StopContainer for \"3c45d6244af338fd99fe77c0fd469b3f230f02dad64093a8c7eb9ab20aea7b1f\" with timeout 30 (s)" Oct 2 19:47:42.036204 env[1142]: time="2023-10-02T19:47:42.036163507Z" level=info msg="Stop container \"3c45d6244af338fd99fe77c0fd469b3f230f02dad64093a8c7eb9ab20aea7b1f\" with signal terminated" Oct 2 19:47:42.045642 systemd[1]: cri-containerd-3c45d6244af338fd99fe77c0fd469b3f230f02dad64093a8c7eb9ab20aea7b1f.scope: Deactivated successfully. Oct 2 19:47:42.046000 audit: BPF prog-id=80 op=UNLOAD Oct 2 19:47:42.048501 kernel: audit: type=1334 audit(1696276062.046:697): prog-id=80 op=UNLOAD Oct 2 19:47:42.052000 audit: BPF prog-id=83 op=UNLOAD Oct 2 19:47:42.054903 kernel: audit: type=1334 audit(1696276062.052:698): prog-id=83 op=UNLOAD Oct 2 19:47:42.062126 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05b1b7fd86c13eb9fbfc8b75bb43644981372fd48568126544cab1aceff618f1-rootfs.mount: Deactivated successfully. 
Oct 2 19:47:42.066473 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c45d6244af338fd99fe77c0fd469b3f230f02dad64093a8c7eb9ab20aea7b1f-rootfs.mount: Deactivated successfully. Oct 2 19:47:42.071675 env[1142]: time="2023-10-02T19:47:42.071621179Z" level=info msg="shim disconnected" id=05b1b7fd86c13eb9fbfc8b75bb43644981372fd48568126544cab1aceff618f1 Oct 2 19:47:42.071675 env[1142]: time="2023-10-02T19:47:42.071675659Z" level=warning msg="cleaning up after shim disconnected" id=05b1b7fd86c13eb9fbfc8b75bb43644981372fd48568126544cab1aceff618f1 namespace=k8s.io Oct 2 19:47:42.071864 env[1142]: time="2023-10-02T19:47:42.071686459Z" level=info msg="cleaning up dead shim" Oct 2 19:47:42.071864 env[1142]: time="2023-10-02T19:47:42.071622939Z" level=info msg="shim disconnected" id=3c45d6244af338fd99fe77c0fd469b3f230f02dad64093a8c7eb9ab20aea7b1f Oct 2 19:47:42.071864 env[1142]: time="2023-10-02T19:47:42.071753699Z" level=warning msg="cleaning up after shim disconnected" id=3c45d6244af338fd99fe77c0fd469b3f230f02dad64093a8c7eb9ab20aea7b1f namespace=k8s.io Oct 2 19:47:42.071864 env[1142]: time="2023-10-02T19:47:42.071769099Z" level=info msg="cleaning up dead shim" Oct 2 19:47:42.080236 env[1142]: time="2023-10-02T19:47:42.080191695Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:47:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2320 runtime=io.containerd.runc.v2\n" Oct 2 19:47:42.080523 env[1142]: time="2023-10-02T19:47:42.080498856Z" level=info msg="TearDown network for sandbox \"05b1b7fd86c13eb9fbfc8b75bb43644981372fd48568126544cab1aceff618f1\" successfully" Oct 2 19:47:42.080571 env[1142]: time="2023-10-02T19:47:42.080523977Z" level=info msg="StopPodSandbox for \"05b1b7fd86c13eb9fbfc8b75bb43644981372fd48568126544cab1aceff618f1\" returns successfully" Oct 2 19:47:42.082099 env[1142]: time="2023-10-02T19:47:42.081560901Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:47:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2321 runtime=io.containerd.runc.v2\n" Oct 2 19:47:42.087233 env[1142]: time="2023-10-02T19:47:42.087205485Z" level=info msg="StopContainer for \"3c45d6244af338fd99fe77c0fd469b3f230f02dad64093a8c7eb9ab20aea7b1f\" returns successfully" Oct 2 19:47:42.088490 env[1142]: time="2023-10-02T19:47:42.088463970Z" level=info msg="StopPodSandbox for \"01adcc99929dde708054b2071dd122b4e7dc9d04f7045ba46c54237349d021d9\"" Oct 2 19:47:42.088683 env[1142]: time="2023-10-02T19:47:42.088517971Z" level=info msg="Container to stop \"3c45d6244af338fd99fe77c0fd469b3f230f02dad64093a8c7eb9ab20aea7b1f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:47:42.089784 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-01adcc99929dde708054b2071dd122b4e7dc9d04f7045ba46c54237349d021d9-shm.mount: Deactivated successfully. Oct 2 19:47:42.096000 audit: BPF prog-id=76 op=UNLOAD Oct 2 19:47:42.096286 systemd[1]: cri-containerd-01adcc99929dde708054b2071dd122b4e7dc9d04f7045ba46c54237349d021d9.scope: Deactivated successfully. Oct 2 19:47:42.097468 kernel: audit: type=1334 audit(1696276062.096:699): prog-id=76 op=UNLOAD Oct 2 19:47:42.102000 audit: BPF prog-id=79 op=UNLOAD Oct 2 19:47:42.103452 kernel: audit: type=1334 audit(1696276062.102:700): prog-id=79 op=UNLOAD Oct 2 19:47:42.113837 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01adcc99929dde708054b2071dd122b4e7dc9d04f7045ba46c54237349d021d9-rootfs.mount: Deactivated successfully. 
Oct 2 19:47:42.118674 env[1142]: time="2023-10-02T19:47:42.118629299Z" level=info msg="shim disconnected" id=01adcc99929dde708054b2071dd122b4e7dc9d04f7045ba46c54237349d021d9 Oct 2 19:47:42.118820 env[1142]: time="2023-10-02T19:47:42.118676979Z" level=warning msg="cleaning up after shim disconnected" id=01adcc99929dde708054b2071dd122b4e7dc9d04f7045ba46c54237349d021d9 namespace=k8s.io Oct 2 19:47:42.118820 env[1142]: time="2023-10-02T19:47:42.118687939Z" level=info msg="cleaning up dead shim" Oct 2 19:47:42.126550 env[1142]: time="2023-10-02T19:47:42.126513773Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:47:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2362 runtime=io.containerd.runc.v2\n" Oct 2 19:47:42.126814 env[1142]: time="2023-10-02T19:47:42.126783734Z" level=info msg="TearDown network for sandbox \"01adcc99929dde708054b2071dd122b4e7dc9d04f7045ba46c54237349d021d9\" successfully" Oct 2 19:47:42.126814 env[1142]: time="2023-10-02T19:47:42.126809014Z" level=info msg="StopPodSandbox for \"01adcc99929dde708054b2071dd122b4e7dc9d04f7045ba46c54237349d021d9\" returns successfully" Oct 2 19:47:42.149622 kubelet[1443]: I1002 19:47:42.149559 1443 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-host-proc-sys-kernel\") pod \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\" (UID: \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\") " Oct 2 19:47:42.149622 kubelet[1443]: I1002 19:47:42.149568 1443 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c36e9a16-e4fb-42e8-9248-de5972ff8c8a" (UID: "c36e9a16-e4fb-42e8-9248-de5972ff8c8a"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:47:42.149786 kubelet[1443]: I1002 19:47:42.149642 1443 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-xtables-lock\") pod \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\" (UID: \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\") " Oct 2 19:47:42.149786 kubelet[1443]: I1002 19:47:42.149682 1443 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-clustermesh-secrets\") pod \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\" (UID: \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\") " Oct 2 19:47:42.149786 kubelet[1443]: I1002 19:47:42.149703 1443 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-bpf-maps\") pod \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\" (UID: \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\") " Oct 2 19:47:42.149786 kubelet[1443]: I1002 19:47:42.149721 1443 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-cni-path\") pod \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\" (UID: \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\") " Oct 2 19:47:42.149786 kubelet[1443]: I1002 19:47:42.149737 1443 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-hostproc\") pod \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\" (UID: \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\") " Oct 2 19:47:42.149786 kubelet[1443]: I1002 19:47:42.149766 1443 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/acb31958-de27-449d-a139-3b5783a29c9c-cilium-config-path\") pod \"acb31958-de27-449d-a139-3b5783a29c9c\" (UID: \"acb31958-de27-449d-a139-3b5783a29c9c\") " Oct 2 19:47:42.149786 kubelet[1443]: I1002 19:47:42.149784 1443 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-host-proc-sys-net\") pod \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\" (UID: \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\") " Oct 2 19:47:42.149950 kubelet[1443]: I1002 19:47:42.149803 1443 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-cilium-config-path\") pod \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\" (UID: \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\") " Oct 2 19:47:42.149950 kubelet[1443]: I1002 19:47:42.149829 1443 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9sns\" (UniqueName: \"kubernetes.io/projected/acb31958-de27-449d-a139-3b5783a29c9c-kube-api-access-l9sns\") pod \"acb31958-de27-449d-a139-3b5783a29c9c\" (UID: \"acb31958-de27-449d-a139-3b5783a29c9c\") " Oct 2 19:47:42.149950 kubelet[1443]: I1002 19:47:42.149847 1443 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-lib-modules\") pod \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\" (UID: \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\") " 
Oct 2 19:47:42.149950 kubelet[1443]: I1002 19:47:42.149864 1443 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-cilium-cgroup\") pod \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\" (UID: \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\") " Oct 2 19:47:42.149950 kubelet[1443]: I1002 19:47:42.149887 1443 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-cilium-ipsec-secrets\") pod \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\" (UID: \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\") " Oct 2 19:47:42.149950 kubelet[1443]: I1002 19:47:42.149912 1443 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzdl9\" (UniqueName: \"kubernetes.io/projected/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-kube-api-access-jzdl9\") pod \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\" (UID: \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\") " Oct 2 19:47:42.149950 kubelet[1443]: I1002 19:47:42.149929 1443 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-etc-cni-netd\") pod \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\" (UID: \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\") " Oct 2 19:47:42.149950 kubelet[1443]: I1002 19:47:42.149945 1443 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-cilium-run\") pod \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\" (UID: \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\") " Oct 2 19:47:42.150130 kubelet[1443]: I1002 19:47:42.149963 1443 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-hubble-tls\") pod \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\" (UID: \"c36e9a16-e4fb-42e8-9248-de5972ff8c8a\") " Oct 2 19:47:42.150130 kubelet[1443]: I1002 19:47:42.149991 1443 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-host-proc-sys-kernel\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:47:42.152471 kubelet[1443]: I1002 19:47:42.150230 1443 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-cni-path" (OuterVolumeSpecName: "cni-path") pod "c36e9a16-e4fb-42e8-9248-de5972ff8c8a" (UID: "c36e9a16-e4fb-42e8-9248-de5972ff8c8a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:47:42.152471 kubelet[1443]: I1002 19:47:42.150263 1443 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c36e9a16-e4fb-42e8-9248-de5972ff8c8a" (UID: "c36e9a16-e4fb-42e8-9248-de5972ff8c8a"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:47:42.152471 kubelet[1443]: I1002 19:47:42.150495 1443 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c36e9a16-e4fb-42e8-9248-de5972ff8c8a" (UID: "c36e9a16-e4fb-42e8-9248-de5972ff8c8a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:47:42.152471 kubelet[1443]: I1002 19:47:42.150923 1443 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c36e9a16-e4fb-42e8-9248-de5972ff8c8a" (UID: "c36e9a16-e4fb-42e8-9248-de5972ff8c8a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:47:42.152471 kubelet[1443]: I1002 19:47:42.150953 1443 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c36e9a16-e4fb-42e8-9248-de5972ff8c8a" (UID: "c36e9a16-e4fb-42e8-9248-de5972ff8c8a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:47:42.152471 kubelet[1443]: I1002 19:47:42.150978 1443 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c36e9a16-e4fb-42e8-9248-de5972ff8c8a" (UID: "c36e9a16-e4fb-42e8-9248-de5972ff8c8a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:47:42.152471 kubelet[1443]: I1002 19:47:42.150998 1443 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-hostproc" (OuterVolumeSpecName: "hostproc") pod "c36e9a16-e4fb-42e8-9248-de5972ff8c8a" (UID: "c36e9a16-e4fb-42e8-9248-de5972ff8c8a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:47:42.152471 kubelet[1443]: I1002 19:47:42.151013 1443 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c36e9a16-e4fb-42e8-9248-de5972ff8c8a" (UID: "c36e9a16-e4fb-42e8-9248-de5972ff8c8a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:47:42.152471 kubelet[1443]: I1002 19:47:42.151028 1443 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c36e9a16-e4fb-42e8-9248-de5972ff8c8a" (UID: "c36e9a16-e4fb-42e8-9248-de5972ff8c8a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:47:42.152471 kubelet[1443]: I1002 19:47:42.152417 1443 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c36e9a16-e4fb-42e8-9248-de5972ff8c8a" (UID: "c36e9a16-e4fb-42e8-9248-de5972ff8c8a"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:47:42.152836 kubelet[1443]: I1002 19:47:42.152802 1443 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acb31958-de27-449d-a139-3b5783a29c9c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "acb31958-de27-449d-a139-3b5783a29c9c" (UID: "acb31958-de27-449d-a139-3b5783a29c9c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:47:42.153294 kubelet[1443]: I1002 19:47:42.153272 1443 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "c36e9a16-e4fb-42e8-9248-de5972ff8c8a" (UID: "c36e9a16-e4fb-42e8-9248-de5972ff8c8a"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:47:42.153591 kubelet[1443]: I1002 19:47:42.153567 1443 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-kube-api-access-jzdl9" (OuterVolumeSpecName: "kube-api-access-jzdl9") pod "c36e9a16-e4fb-42e8-9248-de5972ff8c8a" (UID: "c36e9a16-e4fb-42e8-9248-de5972ff8c8a"). InnerVolumeSpecName "kube-api-access-jzdl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:47:42.155938 kubelet[1443]: I1002 19:47:42.155910 1443 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c36e9a16-e4fb-42e8-9248-de5972ff8c8a" (UID: "c36e9a16-e4fb-42e8-9248-de5972ff8c8a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:47:42.156011 kubelet[1443]: I1002 19:47:42.155959 1443 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acb31958-de27-449d-a139-3b5783a29c9c-kube-api-access-l9sns" (OuterVolumeSpecName: "kube-api-access-l9sns") pod "acb31958-de27-449d-a139-3b5783a29c9c" (UID: "acb31958-de27-449d-a139-3b5783a29c9c"). InnerVolumeSpecName "kube-api-access-l9sns". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:47:42.156042 kubelet[1443]: I1002 19:47:42.156014 1443 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c36e9a16-e4fb-42e8-9248-de5972ff8c8a" (UID: "c36e9a16-e4fb-42e8-9248-de5972ff8c8a"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:47:42.250348 kubelet[1443]: I1002 19:47:42.250276 1443 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-l9sns\" (UniqueName: \"kubernetes.io/projected/acb31958-de27-449d-a139-3b5783a29c9c-kube-api-access-l9sns\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:47:42.250348 kubelet[1443]: I1002 19:47:42.250321 1443 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-lib-modules\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:47:42.250348 kubelet[1443]: I1002 19:47:42.250333 1443 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-cilium-cgroup\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:47:42.250348 kubelet[1443]: I1002 19:47:42.250342 1443 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-cilium-ipsec-secrets\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:47:42.250348 kubelet[1443]: I1002 19:47:42.250352 1443 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-hubble-tls\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:47:42.250348 kubelet[1443]: I1002 19:47:42.250362 1443 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jzdl9\" (UniqueName: \"kubernetes.io/projected/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-kube-api-access-jzdl9\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:47:42.250630 kubelet[1443]: I1002 19:47:42.250372 1443 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-etc-cni-netd\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:47:42.250630 kubelet[1443]: I1002 19:47:42.250380 1443 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-cilium-run\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:47:42.250630 kubelet[1443]: I1002 19:47:42.250391 1443 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-bpf-maps\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:47:42.250630 kubelet[1443]: I1002 19:47:42.250400 1443 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-cni-path\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:47:42.250630 kubelet[1443]: I1002 19:47:42.250408 1443 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-xtables-lock\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:47:42.250630 kubelet[1443]: I1002 19:47:42.250417 1443 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-clustermesh-secrets\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:47:42.250630 kubelet[1443]: I1002 19:47:42.250426 1443 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-hostproc\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 
19:47:42.250630 kubelet[1443]: I1002 19:47:42.250449 1443 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/acb31958-de27-449d-a139-3b5783a29c9c-cilium-config-path\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:47:42.250630 kubelet[1443]: I1002 19:47:42.250459 1443 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-host-proc-sys-net\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:47:42.250630 kubelet[1443]: I1002 19:47:42.250468 1443 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c36e9a16-e4fb-42e8-9248-de5972ff8c8a-cilium-config-path\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:47:42.489893 kubelet[1443]: E1002 19:47:42.489850 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:42.828098 kubelet[1443]: I1002 19:47:42.828022 1443 scope.go:117] "RemoveContainer" containerID="697b27b354d52877fc6779c54ecd8bf8e86a673d6b391dd188681313ea78befd" Oct 2 19:47:42.829071 env[1142]: time="2023-10-02T19:47:42.829042651Z" level=info msg="RemoveContainer for \"697b27b354d52877fc6779c54ecd8bf8e86a673d6b391dd188681313ea78befd\"" Oct 2 19:47:42.831754 env[1142]: time="2023-10-02T19:47:42.831719983Z" level=info msg="RemoveContainer for \"697b27b354d52877fc6779c54ecd8bf8e86a673d6b391dd188681313ea78befd\" returns successfully" Oct 2 19:47:42.832054 kubelet[1443]: I1002 19:47:42.832036 1443 scope.go:117] "RemoveContainer" containerID="3c45d6244af338fd99fe77c0fd469b3f230f02dad64093a8c7eb9ab20aea7b1f" Oct 2 19:47:42.832116 systemd[1]: Removed slice kubepods-burstable-podc36e9a16_e4fb_42e8_9248_de5972ff8c8a.slice. 
Oct 2 19:47:42.833094 env[1142]: time="2023-10-02T19:47:42.833070308Z" level=info msg="RemoveContainer for \"3c45d6244af338fd99fe77c0fd469b3f230f02dad64093a8c7eb9ab20aea7b1f\"" Oct 2 19:47:42.835071 env[1142]: time="2023-10-02T19:47:42.835044277Z" level=info msg="RemoveContainer for \"3c45d6244af338fd99fe77c0fd469b3f230f02dad64093a8c7eb9ab20aea7b1f\" returns successfully" Oct 2 19:47:42.835313 kubelet[1443]: I1002 19:47:42.835293 1443 scope.go:117] "RemoveContainer" containerID="3c45d6244af338fd99fe77c0fd469b3f230f02dad64093a8c7eb9ab20aea7b1f" Oct 2 19:47:42.835634 env[1142]: time="2023-10-02T19:47:42.835570959Z" level=error msg="ContainerStatus for \"3c45d6244af338fd99fe77c0fd469b3f230f02dad64093a8c7eb9ab20aea7b1f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3c45d6244af338fd99fe77c0fd469b3f230f02dad64093a8c7eb9ab20aea7b1f\": not found" Oct 2 19:47:42.835947 kubelet[1443]: E1002 19:47:42.835926 1443 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3c45d6244af338fd99fe77c0fd469b3f230f02dad64093a8c7eb9ab20aea7b1f\": not found" containerID="3c45d6244af338fd99fe77c0fd469b3f230f02dad64093a8c7eb9ab20aea7b1f" Oct 2 19:47:42.836005 kubelet[1443]: I1002 19:47:42.835963 1443 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3c45d6244af338fd99fe77c0fd469b3f230f02dad64093a8c7eb9ab20aea7b1f"} err="failed to get container status \"3c45d6244af338fd99fe77c0fd469b3f230f02dad64093a8c7eb9ab20aea7b1f\": rpc error: code = NotFound desc = an error occurred when try to find container \"3c45d6244af338fd99fe77c0fd469b3f230f02dad64093a8c7eb9ab20aea7b1f\": not found" Oct 2 19:47:42.836314 systemd[1]: Removed slice kubepods-besteffort-podacb31958_de27_449d_a139_3b5783a29c9c.slice. Oct 2 19:47:43.020877 systemd[1]: var-lib-kubelet-pods-acb31958\x2dde27\x2d449d\x2da139\x2d3b5783a29c9c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl9sns.mount: Deactivated successfully. Oct 2 19:47:43.020973 systemd[1]: var-lib-kubelet-pods-c36e9a16\x2de4fb\x2d42e8\x2d9248\x2dde5972ff8c8a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djzdl9.mount: Deactivated successfully. Oct 2 19:47:43.021043 systemd[1]: var-lib-kubelet-pods-c36e9a16\x2de4fb\x2d42e8\x2d9248\x2dde5972ff8c8a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:47:43.021094 systemd[1]: var-lib-kubelet-pods-c36e9a16\x2de4fb\x2d42e8\x2d9248\x2dde5972ff8c8a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:47:43.021142 systemd[1]: var-lib-kubelet-pods-c36e9a16\x2de4fb\x2d42e8\x2d9248\x2dde5972ff8c8a-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Oct 2 19:47:43.409336 kubelet[1443]: I1002 19:47:43.409290 1443 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="acb31958-de27-449d-a139-3b5783a29c9c" path="/var/lib/kubelet/pods/acb31958-de27-449d-a139-3b5783a29c9c/volumes" Oct 2 19:47:43.409722 kubelet[1443]: I1002 19:47:43.409695 1443 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c36e9a16-e4fb-42e8-9248-de5972ff8c8a" path="/var/lib/kubelet/pods/c36e9a16-e4fb-42e8-9248-de5972ff8c8a/volumes" Oct 2 19:47:43.490880 kubelet[1443]: E1002 19:47:43.490847 1443 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"