Oct 2 19:55:51.746664 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 2 19:55:51.746684 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Oct 2 17:55:37 -00 2023
Oct 2 19:55:51.746691 kernel: efi: EFI v2.70 by EDK II
Oct 2 19:55:51.746697 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Oct 2 19:55:51.746702 kernel: random: crng init done
Oct 2 19:55:51.746707 kernel: ACPI: Early table checksum verification disabled
Oct 2 19:55:51.746714 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Oct 2 19:55:51.746720 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Oct 2 19:55:51.746726 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 2 19:55:51.746731 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 2 19:55:51.746737 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 2 19:55:51.746742 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 2 19:55:51.746748 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 2 19:55:51.746753 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 2 19:55:51.746761 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 2 19:55:51.746767 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 2 19:55:51.746773 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 2 19:55:51.746778 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Oct 2 19:55:51.746784 kernel: NUMA: Failed to initialise from firmware
Oct 2 19:55:51.746790 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Oct 2 19:55:51.746796 kernel: NUMA: NODE_DATA [mem 0xdcb0c900-0xdcb11fff]
Oct 2 19:55:51.746801 kernel: Zone ranges:
Oct 2 19:55:51.746807 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Oct 2 19:55:51.746814 kernel: DMA32 empty
Oct 2 19:55:51.746819 kernel: Normal empty
Oct 2 19:55:51.746825 kernel: Movable zone start for each node
Oct 2 19:55:51.746974 kernel: Early memory node ranges
Oct 2 19:55:51.746984 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Oct 2 19:55:51.746990 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Oct 2 19:55:51.746996 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Oct 2 19:55:51.747007 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Oct 2 19:55:51.747013 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Oct 2 19:55:51.747019 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Oct 2 19:55:51.747025 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Oct 2 19:55:51.747031 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Oct 2 19:55:51.747040 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Oct 2 19:55:51.747046 kernel: psci: probing for conduit method from ACPI.
Oct 2 19:55:51.747052 kernel: psci: PSCIv1.1 detected in firmware.
Oct 2 19:55:51.747057 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 2 19:55:51.747063 kernel: psci: Trusted OS migration not required
Oct 2 19:55:51.747071 kernel: psci: SMC Calling Convention v1.1
Oct 2 19:55:51.747077 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 2 19:55:51.747085 kernel: ACPI: SRAT not present
Oct 2 19:55:51.747091 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Oct 2 19:55:51.747097 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Oct 2 19:55:51.747104 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Oct 2 19:55:51.747110 kernel: Detected PIPT I-cache on CPU0
Oct 2 19:55:51.747116 kernel: CPU features: detected: GIC system register CPU interface
Oct 2 19:55:51.747122 kernel: CPU features: detected: Hardware dirty bit management
Oct 2 19:55:51.747128 kernel: CPU features: detected: Spectre-v4
Oct 2 19:55:51.747134 kernel: CPU features: detected: Spectre-BHB
Oct 2 19:55:51.747154 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 2 19:55:51.747161 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 2 19:55:51.747167 kernel: CPU features: detected: ARM erratum 1418040
Oct 2 19:55:51.747173 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Oct 2 19:55:51.747179 kernel: Policy zone: DMA
Oct 2 19:55:51.747187 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca
Oct 2 19:55:51.747193 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 2 19:55:51.747199 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 2 19:55:51.747205 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 2 19:55:51.747211 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 2 19:55:51.747218 kernel: Memory: 2459284K/2572288K available (9792K kernel code, 2092K rwdata, 7548K rodata, 34560K init, 779K bss, 113004K reserved, 0K cma-reserved)
Oct 2 19:55:51.747225 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 2 19:55:51.747232 kernel: trace event string verifier disabled
Oct 2 19:55:51.747238 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 2 19:55:51.747244 kernel: rcu: RCU event tracing is enabled.
Oct 2 19:55:51.747251 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 2 19:55:51.747257 kernel: Trampoline variant of Tasks RCU enabled.
Oct 2 19:55:51.747263 kernel: Tracing variant of Tasks RCU enabled.
Oct 2 19:55:51.747269 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 2 19:55:51.747275 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 2 19:55:51.747281 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 2 19:55:51.747287 kernel: GICv3: 256 SPIs implemented
Oct 2 19:55:51.747295 kernel: GICv3: 0 Extended SPIs implemented
Oct 2 19:55:51.747301 kernel: GICv3: Distributor has no Range Selector support
Oct 2 19:55:51.747307 kernel: Root IRQ handler: gic_handle_irq
Oct 2 19:55:51.747313 kernel: GICv3: 16 PPIs implemented
Oct 2 19:55:51.747319 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 2 19:55:51.747325 kernel: ACPI: SRAT not present
Oct 2 19:55:51.747330 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 2 19:55:51.747337 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Oct 2 19:55:51.747343 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Oct 2 19:55:51.747349 kernel: GICv3: using LPI property table @0x00000000400d0000
Oct 2 19:55:51.747355 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Oct 2 19:55:51.747361 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 2 19:55:51.747369 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 2 19:55:51.747375 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 2 19:55:51.747381 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 2 19:55:51.747387 kernel: arm-pv: using stolen time PV
Oct 2 19:55:51.747394 kernel: Console: colour dummy device 80x25
Oct 2 19:55:51.747400 kernel: ACPI: Core revision 20210730
Oct 2 19:55:51.747407 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 2 19:55:51.747413 kernel: pid_max: default: 32768 minimum: 301
Oct 2 19:55:51.747419 kernel: LSM: Security Framework initializing
Oct 2 19:55:51.747425 kernel: SELinux: Initializing.
Oct 2 19:55:51.747433 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 2 19:55:51.747439 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 2 19:55:51.747446 kernel: rcu: Hierarchical SRCU implementation.
Oct 2 19:55:51.747452 kernel: Platform MSI: ITS@0x8080000 domain created
Oct 2 19:55:51.747458 kernel: PCI/MSI: ITS@0x8080000 domain created
Oct 2 19:55:51.747464 kernel: Remapping and enabling EFI services.
Oct 2 19:55:51.747471 kernel: smp: Bringing up secondary CPUs ...
Oct 2 19:55:51.747477 kernel: Detected PIPT I-cache on CPU1
Oct 2 19:55:51.747483 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 2 19:55:51.747491 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Oct 2 19:55:51.747497 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 2 19:55:51.747503 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 2 19:55:51.747510 kernel: Detected PIPT I-cache on CPU2
Oct 2 19:55:51.747516 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Oct 2 19:55:51.747522 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Oct 2 19:55:51.747529 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 2 19:55:51.747535 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Oct 2 19:55:51.747541 kernel: Detected PIPT I-cache on CPU3
Oct 2 19:55:51.747547 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Oct 2 19:55:51.747555 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Oct 2 19:55:51.747561 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 2 19:55:51.747567 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Oct 2 19:55:51.747574 kernel: smp: Brought up 1 node, 4 CPUs
Oct 2 19:55:51.747584 kernel: SMP: Total of 4 processors activated.
Oct 2 19:55:51.747592 kernel: CPU features: detected: 32-bit EL0 Support
Oct 2 19:55:51.747598 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 2 19:55:51.747605 kernel: CPU features: detected: Common not Private translations
Oct 2 19:55:51.747611 kernel: CPU features: detected: CRC32 instructions
Oct 2 19:55:51.747618 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 2 19:55:51.747624 kernel: CPU features: detected: LSE atomic instructions
Oct 2 19:55:51.747631 kernel: CPU features: detected: Privileged Access Never
Oct 2 19:55:51.747639 kernel: CPU features: detected: RAS Extension Support
Oct 2 19:55:51.747645 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 2 19:55:51.747652 kernel: CPU: All CPU(s) started at EL1
Oct 2 19:55:51.747658 kernel: alternatives: patching kernel code
Oct 2 19:55:51.747666 kernel: devtmpfs: initialized
Oct 2 19:55:51.747673 kernel: KASLR enabled
Oct 2 19:55:51.747679 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 2 19:55:51.747686 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 2 19:55:51.747693 kernel: pinctrl core: initialized pinctrl subsystem
Oct 2 19:55:51.747699 kernel: SMBIOS 3.0.0 present.
Oct 2 19:55:51.747706 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Oct 2 19:55:51.747712 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 2 19:55:51.747719 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 2 19:55:51.747726 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 2 19:55:51.747734 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 2 19:55:51.747740 kernel: audit: initializing netlink subsys (disabled)
Oct 2 19:55:51.747747 kernel: audit: type=2000 audit(0.037:1): state=initialized audit_enabled=0 res=1
Oct 2 19:55:51.747754 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 2 19:55:51.747760 kernel: cpuidle: using governor menu
Oct 2 19:55:51.747767 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 2 19:55:51.747773 kernel: ASID allocator initialised with 32768 entries
Oct 2 19:55:51.747779 kernel: ACPI: bus type PCI registered
Oct 2 19:55:51.747786 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 2 19:55:51.747793 kernel: Serial: AMBA PL011 UART driver
Oct 2 19:55:51.747800 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Oct 2 19:55:51.747807 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Oct 2 19:55:51.747813 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Oct 2 19:55:51.747820 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Oct 2 19:55:51.747827 kernel: cryptd: max_cpu_qlen set to 1000
Oct 2 19:55:51.747833 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 2 19:55:51.747840 kernel: ACPI: Added _OSI(Module Device)
Oct 2 19:55:51.747846 kernel: ACPI: Added _OSI(Processor Device)
Oct 2 19:55:51.747862 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 2 19:55:51.747869 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 2 19:55:51.747931 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Oct 2 19:55:51.747939 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Oct 2 19:55:51.747946 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Oct 2 19:55:51.747952 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 2 19:55:51.747959 kernel: ACPI: Interpreter enabled
Oct 2 19:55:51.747965 kernel: ACPI: Using GIC for interrupt routing
Oct 2 19:55:51.747972 kernel: ACPI: MCFG table detected, 1 entries
Oct 2 19:55:51.747981 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 2 19:55:51.747988 kernel: printk: console [ttyAMA0] enabled
Oct 2 19:55:51.747994 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 2 19:55:51.748337 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 2 19:55:51.748625 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 2 19:55:51.748748 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 2 19:55:51.748914 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 2 19:55:51.748997 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 2 19:55:51.749007 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 2 19:55:51.749015 kernel: PCI host bridge to bus 0000:00
Oct 2 19:55:51.749088 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 2 19:55:51.749179 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 2 19:55:51.749239 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 2 19:55:51.749294 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 2 19:55:51.749372 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Oct 2 19:55:51.749443 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Oct 2 19:55:51.749505 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Oct 2 19:55:51.749568 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Oct 2 19:55:51.749630 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 2 19:55:51.749692 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 2 19:55:51.749753 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Oct 2 19:55:51.749816 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Oct 2 19:55:51.749881 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Oct 2 19:55:51.749936 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 2 19:55:51.749990 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Oct 2 19:55:51.749998 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 2 19:55:51.750005 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 2 19:55:51.750012 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 2 19:55:51.750021 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 2 19:55:51.750027 kernel: iommu: Default domain type: Translated
Oct 2 19:55:51.750034 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 2 19:55:51.750041 kernel: vgaarb: loaded
Oct 2 19:55:51.750048 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 2 19:55:51.750055 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Oct 2 19:55:51.750061 kernel: PTP clock support registered
Oct 2 19:55:51.750068 kernel: Registered efivars operations
Oct 2 19:55:51.750074 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 2 19:55:51.750081 kernel: VFS: Disk quotas dquot_6.6.0
Oct 2 19:55:51.750089 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 2 19:55:51.750096 kernel: pnp: PnP ACPI init
Oct 2 19:55:51.750188 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Oct 2 19:55:51.750199 kernel: pnp: PnP ACPI: found 1 devices
Oct 2 19:55:51.750206 kernel: NET: Registered PF_INET protocol family
Oct 2 19:55:51.750213 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 2 19:55:51.750220 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 2 19:55:51.750227 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 2 19:55:51.750236 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 2 19:55:51.750242 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Oct 2 19:55:51.750249 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 2 19:55:51.750256 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 2 19:55:51.750263 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 2 19:55:51.750269 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 2 19:55:51.750290 kernel: PCI: CLS 0 bytes, default 64
Oct 2 19:55:51.750297 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Oct 2 19:55:51.750305 kernel: kvm [1]: HYP mode not available
Oct 2 19:55:51.750312 kernel: Initialise system trusted keyrings
Oct 2 19:55:51.750318 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 2 19:55:51.750325 kernel: Key type asymmetric registered
Oct 2 19:55:51.750332 kernel: Asymmetric key parser 'x509' registered
Oct 2 19:55:51.750338 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Oct 2 19:55:51.750345 kernel: io scheduler mq-deadline registered
Oct 2 19:55:51.750351 kernel: io scheduler kyber registered
Oct 2 19:55:51.750358 kernel: io scheduler bfq registered
Oct 2 19:55:51.750364 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 2 19:55:51.750372 kernel: ACPI: button: Power Button [PWRB]
Oct 2 19:55:51.750379 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Oct 2 19:55:51.750443 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Oct 2 19:55:51.750452 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 2 19:55:51.750459 kernel: thunder_xcv, ver 1.0
Oct 2 19:55:51.750465 kernel: thunder_bgx, ver 1.0
Oct 2 19:55:51.750472 kernel: nicpf, ver 1.0
Oct 2 19:55:51.750478 kernel: nicvf, ver 1.0
Oct 2 19:55:51.750550 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 2 19:55:51.750614 kernel: rtc-efi rtc-efi.0: setting system clock to 2023-10-02T19:55:51 UTC (1696276551)
Oct 2 19:55:51.750623 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 2 19:55:51.750630 kernel: NET: Registered PF_INET6 protocol family
Oct 2 19:55:51.750637 kernel: Segment Routing with IPv6
Oct 2 19:55:51.750643 kernel: In-situ OAM (IOAM) with IPv6
Oct 2 19:55:51.750650 kernel: NET: Registered PF_PACKET protocol family
Oct 2 19:55:51.750657 kernel: Key type dns_resolver registered
Oct 2 19:55:51.750663 kernel: registered taskstats version 1
Oct 2 19:55:51.750671 kernel: Loading compiled-in X.509 certificates
Oct 2 19:55:51.750678 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 3a2a38edc68cb70dc60ec0223a6460557b3bb28d'
Oct 2 19:55:51.750685 kernel: Key type .fscrypt registered
Oct 2 19:55:51.750691 kernel: Key type fscrypt-provisioning registered
Oct 2 19:55:51.750698 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 2 19:55:51.750705 kernel: ima: Allocated hash algorithm: sha1
Oct 2 19:55:51.750712 kernel: ima: No architecture policies found
Oct 2 19:55:51.750718 kernel: Freeing unused kernel memory: 34560K
Oct 2 19:55:51.750725 kernel: Run /init as init process
Oct 2 19:55:51.750732 kernel: with arguments:
Oct 2 19:55:51.750739 kernel: /init
Oct 2 19:55:51.750745 kernel: with environment:
Oct 2 19:55:51.750752 kernel: HOME=/
Oct 2 19:55:51.750758 kernel: TERM=linux
Oct 2 19:55:51.750764 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 2 19:55:51.750773 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 2 19:55:51.750782 systemd[1]: Detected virtualization kvm.
Oct 2 19:55:51.750791 systemd[1]: Detected architecture arm64.
Oct 2 19:55:51.750798 systemd[1]: Running in initrd.
Oct 2 19:55:51.750805 systemd[1]: No hostname configured, using default hostname.
Oct 2 19:55:51.750812 systemd[1]: Hostname set to .
Oct 2 19:55:51.750820 systemd[1]: Initializing machine ID from VM UUID.
Oct 2 19:55:51.750827 systemd[1]: Queued start job for default target initrd.target.
Oct 2 19:55:51.750834 systemd[1]: Started systemd-ask-password-console.path.
Oct 2 19:55:51.750841 systemd[1]: Reached target cryptsetup.target.
Oct 2 19:55:51.750850 systemd[1]: Reached target paths.target.
Oct 2 19:55:51.750926 systemd[1]: Reached target slices.target.
Oct 2 19:55:51.750934 systemd[1]: Reached target swap.target.
Oct 2 19:55:51.750941 systemd[1]: Reached target timers.target.
Oct 2 19:55:51.750948 systemd[1]: Listening on iscsid.socket.
Oct 2 19:55:51.750955 systemd[1]: Listening on iscsiuio.socket.
Oct 2 19:55:51.750963 systemd[1]: Listening on systemd-journald-audit.socket.
Oct 2 19:55:51.750973 systemd[1]: Listening on systemd-journald-dev-log.socket.
Oct 2 19:55:51.750981 systemd[1]: Listening on systemd-journald.socket.
Oct 2 19:55:51.750988 systemd[1]: Listening on systemd-networkd.socket.
Oct 2 19:55:51.750995 systemd[1]: Listening on systemd-udevd-control.socket.
Oct 2 19:55:51.751002 systemd[1]: Listening on systemd-udevd-kernel.socket.
Oct 2 19:55:51.751009 systemd[1]: Reached target sockets.target.
Oct 2 19:55:51.751016 systemd[1]: Starting kmod-static-nodes.service...
Oct 2 19:55:51.751024 systemd[1]: Finished network-cleanup.service.
Oct 2 19:55:51.751031 systemd[1]: Starting systemd-fsck-usr.service...
Oct 2 19:55:51.751039 systemd[1]: Starting systemd-journald.service...
Oct 2 19:55:51.751051 systemd[1]: Starting systemd-modules-load.service...
Oct 2 19:55:51.751060 systemd[1]: Starting systemd-resolved.service...
Oct 2 19:55:51.751067 systemd[1]: Starting systemd-vconsole-setup.service...
Oct 2 19:55:51.751075 systemd[1]: Finished kmod-static-nodes.service.
Oct 2 19:55:51.751082 systemd[1]: Finished systemd-fsck-usr.service.
Oct 2 19:55:51.751089 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Oct 2 19:55:51.751103 systemd-journald[291]: Journal started
Oct 2 19:55:51.751172 systemd-journald[291]: Runtime Journal (/run/log/journal/48b98711dcd94597927bc76e467c7a9a) is 6.0M, max 48.7M, 42.6M free.
Oct 2 19:55:51.741222 systemd-modules-load[292]: Inserted module 'overlay'
Oct 2 19:55:51.756543 systemd[1]: Started systemd-journald.service.
Oct 2 19:55:51.756586 kernel: audit: type=1130 audit(1696276551.754:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:51.756598 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 2 19:55:51.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:51.755117 systemd[1]: Finished systemd-vconsole-setup.service.
Oct 2 19:55:51.760564 kernel: audit: type=1130 audit(1696276551.757:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:51.760586 kernel: Bridge firewalling registered
Oct 2 19:55:51.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:51.758363 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Oct 2 19:55:51.764083 kernel: audit: type=1130 audit(1696276551.761:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:51.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:51.761401 systemd-modules-load[292]: Inserted module 'br_netfilter'
Oct 2 19:55:51.762254 systemd[1]: Starting dracut-cmdline-ask.service...
Oct 2 19:55:51.762747 systemd-resolved[293]: Positive Trust Anchors:
Oct 2 19:55:51.762755 systemd-resolved[293]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 2 19:55:51.762782 systemd-resolved[293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Oct 2 19:55:51.767053 systemd-resolved[293]: Defaulting to hostname 'linux'.
Oct 2 19:55:51.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:51.774180 kernel: audit: type=1130 audit(1696276551.771:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:51.774202 kernel: SCSI subsystem initialized
Oct 2 19:55:51.767944 systemd[1]: Started systemd-resolved.service.
Oct 2 19:55:51.771766 systemd[1]: Reached target nss-lookup.target.
Oct 2 19:55:51.781173 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 2 19:55:51.781198 kernel: device-mapper: uevent: version 1.0.3
Oct 2 19:55:51.781208 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Oct 2 19:55:51.782650 systemd[1]: Finished dracut-cmdline-ask.service.
Oct 2 19:55:51.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:51.784069 systemd[1]: Starting dracut-cmdline.service...
Oct 2 19:55:51.786768 kernel: audit: type=1130 audit(1696276551.783:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:51.786381 systemd-modules-load[292]: Inserted module 'dm_multipath'
Oct 2 19:55:51.787292 systemd[1]: Finished systemd-modules-load.service.
Oct 2 19:55:51.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:51.788738 systemd[1]: Starting systemd-sysctl.service...
Oct 2 19:55:51.791700 kernel: audit: type=1130 audit(1696276551.787:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:51.798035 dracut-cmdline[307]: dracut-dracut-053
Oct 2 19:55:51.798194 systemd[1]: Finished systemd-sysctl.service.
Oct 2 19:55:51.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:51.802165 kernel: audit: type=1130 audit(1696276551.799:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:51.802668 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca
Oct 2 19:55:51.875158 kernel: Loading iSCSI transport class v2.0-870.
Oct 2 19:55:51.884168 kernel: iscsi: registered transport (tcp)
Oct 2 19:55:51.899311 kernel: iscsi: registered transport (qla4xxx)
Oct 2 19:55:51.899329 kernel: QLogic iSCSI HBA Driver
Oct 2 19:55:51.944607 systemd[1]: Finished dracut-cmdline.service.
Oct 2 19:55:51.948231 kernel: audit: type=1130 audit(1696276551.945:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:51.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:51.946053 systemd[1]: Starting dracut-pre-udev.service...
Oct 2 19:55:51.999426 kernel: raid6: neonx8 gen() 13800 MB/s
Oct 2 19:55:52.016363 kernel: raid6: neonx8 xor() 10825 MB/s
Oct 2 19:55:52.033200 kernel: raid6: neonx4 gen() 13560 MB/s
Oct 2 19:55:52.050178 kernel: raid6: neonx4 xor() 11072 MB/s
Oct 2 19:55:52.067177 kernel: raid6: neonx2 gen() 12933 MB/s
Oct 2 19:55:52.084206 kernel: raid6: neonx2 xor() 10286 MB/s
Oct 2 19:55:52.101174 kernel: raid6: neonx1 gen() 10488 MB/s
Oct 2 19:55:52.118174 kernel: raid6: neonx1 xor() 8786 MB/s
Oct 2 19:55:52.135179 kernel: raid6: int64x8 gen() 6295 MB/s
Oct 2 19:55:52.152175 kernel: raid6: int64x8 xor() 3547 MB/s
Oct 2 19:55:52.169173 kernel: raid6: int64x4 gen() 7250 MB/s
Oct 2 19:55:52.186178 kernel: raid6: int64x4 xor() 3854 MB/s
Oct 2 19:55:52.203180 kernel: raid6: int64x2 gen() 6150 MB/s
Oct 2 19:55:52.220179 kernel: raid6: int64x2 xor() 3320 MB/s
Oct 2 19:55:52.237179 kernel: raid6: int64x1 gen() 5044 MB/s
Oct 2 19:55:52.254327 kernel: raid6: int64x1 xor() 2645 MB/s
Oct 2 19:55:52.254381 kernel: raid6: using algorithm neonx8 gen() 13800 MB/s
Oct 2 19:55:52.254391 kernel: raid6: .... xor() 10825 MB/s, rmw enabled
Oct 2 19:55:52.254399 kernel: raid6: using neon recovery algorithm
Oct 2 19:55:52.265312 kernel: xor: measuring software checksum speed
Oct 2 19:55:52.265362 kernel: 8regs : 17282 MB/sec
Oct 2 19:55:52.266158 kernel: 32regs : 20755 MB/sec
Oct 2 19:55:52.267240 kernel: arm64_neon : 27835 MB/sec
Oct 2 19:55:52.267265 kernel: xor: using function: arm64_neon (27835 MB/sec)
Oct 2 19:55:52.318199 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Oct 2 19:55:52.330428 systemd[1]: Finished dracut-pre-udev.service.
Oct 2 19:55:52.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:52.331873 systemd[1]: Starting systemd-udevd.service...
Oct 2 19:55:52.334452 kernel: audit: type=1130 audit(1696276552.330:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:52.331000 audit: BPF prog-id=7 op=LOAD
Oct 2 19:55:52.331000 audit: BPF prog-id=8 op=LOAD
Oct 2 19:55:52.348228 systemd-udevd[491]: Using default interface naming scheme 'v252'.
Oct 2 19:55:52.352479 systemd[1]: Started systemd-udevd.service.
Oct 2 19:55:52.353803 systemd[1]: Starting dracut-pre-trigger.service...
Oct 2 19:55:52.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:52.368417 dracut-pre-trigger[493]: rd.md=0: removing MD RAID activation
Oct 2 19:55:52.402551 systemd[1]: Finished dracut-pre-trigger.service.
Oct 2 19:55:52.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:52.403887 systemd[1]: Starting systemd-udev-trigger.service...
Oct 2 19:55:52.439379 systemd[1]: Finished systemd-udev-trigger.service.
Oct 2 19:55:52.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:52.470937 kernel: virtio_blk virtio1: [vda] 9289728 512-byte logical blocks (4.76 GB/4.43 GiB)
Oct 2 19:55:52.473189 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 2 19:55:52.485775 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Oct 2 19:55:52.488163 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (547)
Oct 2 19:55:52.491094 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Oct 2 19:55:52.497328 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Oct 2 19:55:52.501729 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Oct 2 19:55:52.502473 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Oct 2 19:55:52.504667 systemd[1]: Starting disk-uuid.service...
Oct 2 19:55:52.513161 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 2 19:55:53.525815 disk-uuid[567]: The operation has completed successfully.
Oct 2 19:55:53.526767 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 2 19:55:53.555111 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 2 19:55:53.555265 systemd[1]: Finished disk-uuid.service.
Oct 2 19:55:53.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:53.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:53.557220 systemd[1]: Starting verity-setup.service...
Oct 2 19:55:53.576202 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Oct 2 19:55:53.600755 systemd[1]: Found device dev-mapper-usr.device.
Oct 2 19:55:53.602931 systemd[1]: Mounting sysusr-usr.mount...
Oct 2 19:55:53.604339 systemd[1]: Finished verity-setup.service.
Oct 2 19:55:53.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:53.657157 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Oct 2 19:55:53.657407 systemd[1]: Mounted sysusr-usr.mount.
Oct 2 19:55:53.658585 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Oct 2 19:55:53.660319 systemd[1]: Starting ignition-setup.service...
Oct 2 19:55:53.662712 systemd[1]: Starting parse-ip-for-networkd.service...
Oct 2 19:55:53.673327 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 2 19:55:53.673367 kernel: BTRFS info (device vda6): using free space tree
Oct 2 19:55:53.673377 kernel: BTRFS info (device vda6): has skinny extents
Oct 2 19:55:53.684683 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 2 19:55:53.693752 systemd[1]: Finished ignition-setup.service.
Oct 2 19:55:53.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:53.695190 systemd[1]: Starting ignition-fetch-offline.service...
Oct 2 19:55:53.774002 systemd[1]: Finished parse-ip-for-networkd.service.
Oct 2 19:55:53.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:53.775000 audit: BPF prog-id=9 op=LOAD
Oct 2 19:55:53.776286 systemd[1]: Starting systemd-networkd.service...
Oct 2 19:55:53.791981 ignition[648]: Ignition 2.14.0
Oct 2 19:55:53.791992 ignition[648]: Stage: fetch-offline
Oct 2 19:55:53.792036 ignition[648]: no configs at "/usr/lib/ignition/base.d"
Oct 2 19:55:53.792045 ignition[648]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 2 19:55:53.792221 ignition[648]: parsed url from cmdline: ""
Oct 2 19:55:53.792225 ignition[648]: no config URL provided
Oct 2 19:55:53.792230 ignition[648]: reading system config file "/usr/lib/ignition/user.ign"
Oct 2 19:55:53.792238 ignition[648]: no config at "/usr/lib/ignition/user.ign"
Oct 2 19:55:53.792256 ignition[648]: op(1): [started] loading QEMU firmware config module
Oct 2 19:55:53.792261 ignition[648]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 2 19:55:53.800407 ignition[648]: op(1): [finished] loading QEMU firmware config module
Oct 2 19:55:53.804950 systemd-networkd[741]: lo: Link UP
Oct 2 19:55:53.804960 systemd-networkd[741]: lo: Gained carrier
Oct 2 19:55:53.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:53.805618 systemd-networkd[741]: Enumeration completed
Oct 2 19:55:53.805784 systemd[1]: Started systemd-networkd.service.
Oct 2 19:55:53.806007 systemd-networkd[741]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 2 19:55:53.806590 systemd[1]: Reached target network.target.
Oct 2 19:55:53.807490 systemd-networkd[741]: eth0: Link UP
Oct 2 19:55:53.807494 systemd-networkd[741]: eth0: Gained carrier
Oct 2 19:55:53.812906 systemd[1]: Starting iscsiuio.service...
Oct 2 19:55:53.822441 systemd[1]: Started iscsiuio.service.
Oct 2 19:55:53.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:53.824203 systemd[1]: Starting iscsid.service...
Oct 2 19:55:53.828273 iscsid[747]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Oct 2 19:55:53.828273 iscsid[747]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Oct 2 19:55:53.828273 iscsid[747]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Oct 2 19:55:53.828273 iscsid[747]: If using hardware iscsi like qla4xxx this message can be ignored.
Oct 2 19:55:53.828273 iscsid[747]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Oct 2 19:55:53.828273 iscsid[747]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Oct 2 19:55:53.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:53.831714 systemd-networkd[741]: eth0: DHCPv4 address 10.0.0.10/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 2 19:55:53.839797 ignition[648]: parsing config with SHA512: a13b44597cd277d2acffc3a1631f6b76d0a86cc4f4a4d6d58738ab105c8f15f625e3e89df99eb134a76be00fb6e667b7b8595611f58484ba631183a1edcf8a1a
Oct 2 19:55:53.831715 systemd[1]: Started iscsid.service.
Oct 2 19:55:53.835665 systemd[1]: Starting dracut-initqueue.service...
Oct 2 19:55:53.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:53.854678 systemd[1]: Finished dracut-initqueue.service.
Oct 2 19:55:53.855478 systemd[1]: Reached target remote-fs-pre.target.
Oct 2 19:55:53.856089 systemd[1]: Reached target remote-cryptsetup.target.
Oct 2 19:55:53.856740 systemd[1]: Reached target remote-fs.target.
Oct 2 19:55:53.858085 systemd[1]: Starting dracut-pre-mount.service...
Oct 2 19:55:53.864525 unknown[648]: fetched base config from "system"
Oct 2 19:55:53.864536 unknown[648]: fetched user config from "qemu"
Oct 2 19:55:53.867342 ignition[648]: fetch-offline: fetch-offline passed
Oct 2 19:55:53.867417 ignition[648]: Ignition finished successfully
Oct 2 19:55:53.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:53.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:53.868963 systemd[1]: Finished ignition-fetch-offline.service.
Oct 2 19:55:53.869965 systemd[1]: Finished dracut-pre-mount.service.
Oct 2 19:55:53.870990 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 2 19:55:53.871724 systemd[1]: Starting ignition-kargs.service...
Oct 2 19:55:53.884423 ignition[762]: Ignition 2.14.0
Oct 2 19:55:53.884432 ignition[762]: Stage: kargs
Oct 2 19:55:53.884527 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Oct 2 19:55:53.884540 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 2 19:55:53.885424 ignition[762]: kargs: kargs passed
Oct 2 19:55:53.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:53.885474 ignition[762]: Ignition finished successfully
Oct 2 19:55:53.888381 systemd[1]: Finished ignition-kargs.service.
Oct 2 19:55:53.889968 systemd[1]: Starting ignition-disks.service...
Oct 2 19:55:53.898000 ignition[769]: Ignition 2.14.0
Oct 2 19:55:53.898010 ignition[769]: Stage: disks
Oct 2 19:55:53.898108 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Oct 2 19:55:53.898118 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 2 19:55:53.898954 ignition[769]: disks: disks passed
Oct 2 19:55:53.899891 systemd[1]: Finished ignition-disks.service.
Oct 2 19:55:53.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:53.898995 ignition[769]: Ignition finished successfully
Oct 2 19:55:53.901592 systemd[1]: Reached target initrd-root-device.target.
Oct 2 19:55:53.902547 systemd[1]: Reached target local-fs-pre.target.
Oct 2 19:55:53.903528 systemd[1]: Reached target local-fs.target.
Oct 2 19:55:53.904476 systemd[1]: Reached target sysinit.target.
Oct 2 19:55:53.905371 systemd[1]: Reached target basic.target.
Oct 2 19:55:53.907249 systemd[1]: Starting systemd-fsck-root.service...
Oct 2 19:55:53.920059 systemd-fsck[777]: ROOT: clean, 603/553520 files, 56011/553472 blocks
Oct 2 19:55:53.923941 systemd[1]: Finished systemd-fsck-root.service.
Oct 2 19:55:53.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:53.925694 systemd[1]: Mounting sysroot.mount...
Oct 2 19:55:53.938161 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Oct 2 19:55:53.938668 systemd[1]: Mounted sysroot.mount.
Oct 2 19:55:53.939259 systemd[1]: Reached target initrd-root-fs.target.
Oct 2 19:55:53.941366 systemd[1]: Mounting sysroot-usr.mount...
Oct 2 19:55:53.942054 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Oct 2 19:55:53.942091 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 2 19:55:53.942113 systemd[1]: Reached target ignition-diskful.target.
Oct 2 19:55:53.944702 systemd[1]: Mounted sysroot-usr.mount.
Oct 2 19:55:53.945957 systemd[1]: Starting initrd-setup-root.service...
Oct 2 19:55:53.951304 initrd-setup-root[787]: cut: /sysroot/etc/passwd: No such file or directory
Oct 2 19:55:53.956315 initrd-setup-root[795]: cut: /sysroot/etc/group: No such file or directory
Oct 2 19:55:53.960993 initrd-setup-root[803]: cut: /sysroot/etc/shadow: No such file or directory
Oct 2 19:55:53.965811 initrd-setup-root[811]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 2 19:55:54.002881 systemd[1]: Finished initrd-setup-root.service.
Oct 2 19:55:54.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:54.006863 systemd[1]: Starting ignition-mount.service...
Oct 2 19:55:54.010228 systemd[1]: Starting sysroot-boot.service...
Oct 2 19:55:54.017442 bash[828]: umount: /sysroot/usr/share/oem: not mounted.
Oct 2 19:55:54.027728 ignition[829]: INFO : Ignition 2.14.0
Oct 2 19:55:54.027728 ignition[829]: INFO : Stage: mount
Oct 2 19:55:54.029563 ignition[829]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 2 19:55:54.029563 ignition[829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 2 19:55:54.029563 ignition[829]: INFO : mount: mount passed
Oct 2 19:55:54.029563 ignition[829]: INFO : Ignition finished successfully
Oct 2 19:55:54.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:54.029764 systemd[1]: Finished ignition-mount.service.
Oct 2 19:55:54.051841 systemd[1]: Finished sysroot-boot.service.
Oct 2 19:55:54.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:54.614570 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Oct 2 19:55:54.621150 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (838)
Oct 2 19:55:54.622368 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 2 19:55:54.622382 kernel: BTRFS info (device vda6): using free space tree
Oct 2 19:55:54.622391 kernel: BTRFS info (device vda6): has skinny extents
Oct 2 19:55:54.625979 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Oct 2 19:55:54.627696 systemd[1]: Starting ignition-files.service...
Oct 2 19:55:54.644919 ignition[858]: INFO : Ignition 2.14.0
Oct 2 19:55:54.644919 ignition[858]: INFO : Stage: files
Oct 2 19:55:54.646216 ignition[858]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 2 19:55:54.646216 ignition[858]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 2 19:55:54.646216 ignition[858]: DEBUG : files: compiled without relabeling support, skipping
Oct 2 19:55:54.651737 ignition[858]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 2 19:55:54.651737 ignition[858]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 2 19:55:54.654370 ignition[858]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 2 19:55:54.655457 ignition[858]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 2 19:55:54.655457 ignition[858]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 2 19:55:54.655211 unknown[858]: wrote ssh authorized keys file for user: core
Oct 2 19:55:54.658721 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Oct 2 19:55:54.658721 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1
Oct 2 19:55:54.823646 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 2 19:55:55.021415 systemd-networkd[741]: eth0: Gained IPv6LL
Oct 2 19:55:55.086660 ignition[858]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742
Oct 2 19:55:55.088741 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Oct 2 19:55:55.088741 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.24.2-linux-arm64.tar.gz"
Oct 2 19:55:55.088741 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-arm64.tar.gz: attempt #1
Oct 2 19:55:55.159897 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Oct 2 19:55:55.236856 ignition[858]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: ebd055e9b2888624d006decd582db742131ed815d059d529ba21eaf864becca98a84b20a10eec91051b9d837c6855d28d5042bf5e9a454f4540aec6b82d37e96
Oct 2 19:55:55.238945 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.24.2-linux-arm64.tar.gz"
Oct 2 19:55:55.238945 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm"
Oct 2 19:55:55.238945 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/arm64/kubeadm: attempt #1
Oct 2 19:55:55.354075 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Oct 2 19:55:55.945871 ignition[858]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: daab8965a4f617d1570d04c031ab4d55fff6aa13a61f0e4045f2338947f9fb0ee3a80fdee57cfe86db885390595460342181e1ec52b89f127ef09c393ae3db7f
Oct 2 19:55:55.947964 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm"
Oct 2 19:55:55.947964 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet"
Oct 2 19:55:55.947964 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/arm64/kubelet: attempt #1
Oct 2 19:55:56.001088 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Oct 2 19:55:58.134240 ignition[858]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 7b872a34d86e8aa75455a62a20f5cf16426de2ae54ffb8e0250fead920838df818201b8512c2f8bf4c939e5b21babab371f3a48803e2e861da9e6f8cdd022324
Oct 2 19:55:58.136723 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet"
Oct 2 19:55:58.136723 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Oct 2 19:55:58.136723 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Oct 2 19:55:58.136723 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json"
Oct 2 19:55:58.136723 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json"
Oct 2 19:55:58.136723 ignition[858]: INFO : files: op(9): [started] processing unit "prepare-cni-plugins.service"
Oct 2 19:55:58.136723 ignition[858]: INFO : files: op(9): op(a): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Oct 2 19:55:58.146046 ignition[858]: INFO : files: op(9): op(a): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Oct 2 19:55:58.146046 ignition[858]: INFO : files: op(9): [finished] processing unit "prepare-cni-plugins.service"
Oct 2 19:55:58.146046 ignition[858]: INFO : files: op(b): [started] processing unit "prepare-critools.service"
Oct 2 19:55:58.146046 ignition[858]: INFO : files: op(b): op(c): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Oct 2 19:55:58.146046 ignition[858]: INFO : files: op(b): op(c): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Oct 2 19:55:58.146046 ignition[858]: INFO : files: op(b): [finished] processing unit "prepare-critools.service"
Oct 2 19:55:58.146046 ignition[858]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 2 19:55:58.146046 ignition[858]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 2 19:55:58.146046 ignition[858]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 2 19:55:58.146046 ignition[858]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 2 19:55:58.146046 ignition[858]: INFO : files: op(f): [started] setting preset to enabled for "prepare-cni-plugins.service"
Oct 2 19:55:58.146046 ignition[858]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Oct 2 19:55:58.146046 ignition[858]: INFO : files: op(10): [started] setting preset to enabled for "prepare-critools.service"
Oct 2 19:55:58.146046 ignition[858]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-critools.service"
Oct 2 19:55:58.146046 ignition[858]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service"
Oct 2 19:55:58.146046 ignition[858]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 2 19:55:58.205462 ignition[858]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 2 19:55:58.206696 ignition[858]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 2 19:55:58.206696 ignition[858]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 2 19:55:58.206696 ignition[858]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 2 19:55:58.206696 ignition[858]: INFO : files: files passed
Oct 2 19:55:58.206696 ignition[858]: INFO : Ignition finished successfully
Oct 2 19:55:58.216070 kernel: kauditd_printk_skb: 23 callbacks suppressed
Oct 2 19:55:58.216092 kernel: audit: type=1130 audit(1696276558.208:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.207005 systemd[1]: Finished ignition-files.service.
Oct 2 19:55:58.210077 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Oct 2 19:55:58.212693 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Oct 2 19:55:58.219334 initrd-setup-root-after-ignition[882]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Oct 2 19:55:58.224637 kernel: audit: type=1130 audit(1696276558.219:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.224660 kernel: audit: type=1131 audit(1696276558.219:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.213414 systemd[1]: Starting ignition-quench.service...
Oct 2 19:55:58.225626 initrd-setup-root-after-ignition[885]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 2 19:55:58.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.218193 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 2 19:55:58.230491 kernel: audit: type=1130 audit(1696276558.226:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.218276 systemd[1]: Finished ignition-quench.service.
Oct 2 19:55:58.225028 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Oct 2 19:55:58.226357 systemd[1]: Reached target ignition-complete.target.
Oct 2 19:55:58.230629 systemd[1]: Starting initrd-parse-etc.service...
Oct 2 19:55:58.246202 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 2 19:55:58.246300 systemd[1]: Finished initrd-parse-etc.service.
Oct 2 19:55:58.251194 kernel: audit: type=1130 audit(1696276558.247:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.251215 kernel: audit: type=1131 audit(1696276558.247:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.247581 systemd[1]: Reached target initrd-fs.target.
Oct 2 19:55:58.251686 systemd[1]: Reached target initrd.target.
Oct 2 19:55:58.252670 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Oct 2 19:55:58.253595 systemd[1]: Starting dracut-pre-pivot.service...
Oct 2 19:55:58.265673 systemd[1]: Finished dracut-pre-pivot.service.
Oct 2 19:55:58.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.267169 systemd[1]: Starting initrd-cleanup.service...
Oct 2 19:55:58.269608 kernel: audit: type=1130 audit(1696276558.266:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.276680 systemd[1]: Stopped target nss-lookup.target.
Oct 2 19:55:58.277393 systemd[1]: Stopped target remote-cryptsetup.target.
Oct 2 19:55:58.278469 systemd[1]: Stopped target timers.target.
Oct 2 19:55:58.279478 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 2 19:55:58.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.279593 systemd[1]: Stopped dracut-pre-pivot.service.
Oct 2 19:55:58.283655 kernel: audit: type=1131 audit(1696276558.280:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.280582 systemd[1]: Stopped target initrd.target.
Oct 2 19:55:58.283262 systemd[1]: Stopped target basic.target.
Oct 2 19:55:58.284223 systemd[1]: Stopped target ignition-complete.target.
Oct 2 19:55:58.285292 systemd[1]: Stopped target ignition-diskful.target.
Oct 2 19:55:58.286254 systemd[1]: Stopped target initrd-root-device.target.
Oct 2 19:55:58.287364 systemd[1]: Stopped target remote-fs.target.
Oct 2 19:55:58.288384 systemd[1]: Stopped target remote-fs-pre.target.
Oct 2 19:55:58.289477 systemd[1]: Stopped target sysinit.target.
Oct 2 19:55:58.290450 systemd[1]: Stopped target local-fs.target.
Oct 2 19:55:58.291484 systemd[1]: Stopped target local-fs-pre.target.
Oct 2 19:55:58.292493 systemd[1]: Stopped target swap.target.
Oct 2 19:55:58.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.293446 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 2 19:55:58.297995 kernel: audit: type=1131 audit(1696276558.294:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:55:58.293560 systemd[1]: Stopped dracut-pre-mount.service.
Oct 2 19:55:58.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:58.294569 systemd[1]: Stopped target cryptsetup.target. Oct 2 19:55:58.301851 kernel: audit: type=1131 audit(1696276558.298:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:58.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:58.297454 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 19:55:58.297560 systemd[1]: Stopped dracut-initqueue.service. Oct 2 19:55:58.298736 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 19:55:58.298843 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 19:55:58.301555 systemd[1]: Stopped target paths.target. Oct 2 19:55:58.302357 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 19:55:58.305177 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 19:55:58.306046 systemd[1]: Stopped target slices.target. Oct 2 19:55:58.307170 systemd[1]: Stopped target sockets.target. Oct 2 19:55:58.308227 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 19:55:58.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:58.308296 systemd[1]: Closed iscsid.socket. Oct 2 19:55:58.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:55:58.309092 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 19:55:58.309212 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 19:55:58.310179 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 19:55:58.310269 systemd[1]: Stopped ignition-files.service. Oct 2 19:55:58.312099 systemd[1]: Stopping ignition-mount.service... Oct 2 19:55:58.313031 systemd[1]: Stopping iscsiuio.service... Oct 2 19:55:58.315370 systemd[1]: Stopping sysroot-boot.service... Oct 2 19:55:58.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:58.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:58.316051 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 19:55:58.316207 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 19:55:58.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:58.317733 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 19:55:58.317851 systemd[1]: Stopped dracut-pre-trigger.service. 
Oct 2 19:55:58.324352 ignition[898]: INFO : Ignition 2.14.0 Oct 2 19:55:58.324352 ignition[898]: INFO : Stage: umount Oct 2 19:55:58.324352 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:55:58.324352 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:55:58.324352 ignition[898]: INFO : umount: umount passed Oct 2 19:55:58.324352 ignition[898]: INFO : Ignition finished successfully Oct 2 19:55:58.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:58.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:58.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:58.320206 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 19:55:58.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:58.320299 systemd[1]: Stopped iscsiuio.service. Oct 2 19:55:58.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:58.321749 systemd[1]: Stopped target network.target. Oct 2 19:55:58.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:55:58.322561 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 19:55:58.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:58.322589 systemd[1]: Closed iscsiuio.socket. Oct 2 19:55:58.324010 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:55:58.324977 systemd[1]: Stopping systemd-resolved.service... Oct 2 19:55:58.326036 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 19:55:58.326110 systemd[1]: Finished initrd-cleanup.service. Oct 2 19:55:58.327265 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 19:55:58.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:58.327340 systemd[1]: Stopped ignition-mount.service. Oct 2 19:55:58.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:58.329038 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 19:55:58.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:58.329090 systemd[1]: Stopped ignition-disks.service. Oct 2 19:55:58.329647 systemd-networkd[741]: eth0: DHCPv6 lease lost Oct 2 19:55:58.349000 audit: BPF prog-id=9 op=UNLOAD Oct 2 19:55:58.332039 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 19:55:58.332081 systemd[1]: Stopped ignition-kargs.service. Oct 2 19:55:58.333313 systemd[1]: ignition-setup.service: Deactivated successfully. 
Oct 2 19:55:58.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:58.333355 systemd[1]: Stopped ignition-setup.service. Oct 2 19:55:58.334635 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:55:58.334719 systemd[1]: Stopped systemd-networkd.service. Oct 2 19:55:58.356000 audit: BPF prog-id=6 op=UNLOAD Oct 2 19:55:58.336592 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 19:55:58.336907 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 19:55:58.336932 systemd[1]: Closed systemd-networkd.socket. Oct 2 19:55:58.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:58.339884 systemd[1]: Stopping network-cleanup.service... Oct 2 19:55:58.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:58.340991 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 19:55:58.341050 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 19:55:58.342232 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 19:55:58.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:58.342279 systemd[1]: Stopped systemd-sysctl.service. Oct 2 19:55:58.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:55:58.344122 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 19:55:58.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:58.344195 systemd[1]: Stopped systemd-modules-load.service. Oct 2 19:55:58.346384 systemd[1]: Stopping systemd-udevd.service... Oct 2 19:55:58.351570 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 19:55:58.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:58.352092 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 19:55:58.352216 systemd[1]: Stopped systemd-resolved.service. Oct 2 19:55:58.357378 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 19:55:58.357503 systemd[1]: Stopped systemd-udevd.service. Oct 2 19:55:58.358946 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 19:55:58.359034 systemd[1]: Stopped network-cleanup.service. Oct 2 19:55:58.359947 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 19:55:58.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:58.359982 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 19:55:58.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:58.361020 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Oct 2 19:55:58.361054 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 19:55:58.362183 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 19:55:58.362225 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 19:55:58.363379 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 19:55:58.363418 systemd[1]: Stopped dracut-cmdline.service. Oct 2 19:55:58.364641 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 19:55:58.364683 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 19:55:58.366582 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 19:55:58.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:58.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:58.367631 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 2 19:55:58.367698 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Oct 2 19:55:58.372555 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 19:55:58.372601 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 19:55:58.373702 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 19:55:58.373742 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 19:55:58.375819 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 2 19:55:58.380589 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 19:55:58.380679 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 19:55:58.412211 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Oct 2 19:55:58.412304 systemd[1]: Stopped sysroot-boot.service. Oct 2 19:55:58.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:58.413609 systemd[1]: Reached target initrd-switch-root.target. Oct 2 19:55:58.414534 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 19:55:58.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:58.414585 systemd[1]: Stopped initrd-setup-root.service. Oct 2 19:55:58.416461 systemd[1]: Starting initrd-switch-root.service... Oct 2 19:55:58.429236 systemd[1]: Switching root. Oct 2 19:55:58.453972 iscsid[747]: iscsid shutting down. Oct 2 19:55:58.454574 systemd-journald[291]: Received SIGTERM from PID 1 (n/a). Oct 2 19:55:58.454623 systemd-journald[291]: Journal stopped Oct 2 19:56:00.607223 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 19:56:00.607319 kernel: SELinux: Class anon_inode not defined in policy. 
Oct 2 19:56:00.607340 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 19:56:00.607350 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 19:56:00.607370 kernel: SELinux: policy capability open_perms=1 Oct 2 19:56:00.607379 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 19:56:00.607389 kernel: SELinux: policy capability always_check_network=0 Oct 2 19:56:00.607399 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 19:56:00.607419 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 19:56:00.607431 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 19:56:00.607440 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 19:56:00.607451 systemd[1]: Successfully loaded SELinux policy in 33.376ms. Oct 2 19:56:00.607474 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.055ms. Oct 2 19:56:00.607495 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:56:00.607506 systemd[1]: Detected virtualization kvm. Oct 2 19:56:00.607518 systemd[1]: Detected architecture arm64. Oct 2 19:56:00.607528 systemd[1]: Detected first boot. Oct 2 19:56:00.607539 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:56:00.607549 systemd[1]: Populated /etc with preset unit settings. Oct 2 19:56:00.607560 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:56:00.607571 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 2 19:56:00.607585 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:56:00.607597 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 19:56:00.607607 systemd[1]: Stopped iscsid.service. Oct 2 19:56:00.607618 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 19:56:00.607628 systemd[1]: Stopped initrd-switch-root.service. Oct 2 19:56:00.607639 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 19:56:00.607649 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 19:56:00.607661 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 19:56:00.607673 systemd[1]: Created slice system-getty.slice. Oct 2 19:56:00.607683 systemd[1]: Created slice system-modprobe.slice. Oct 2 19:56:00.607693 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 19:56:00.607704 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 19:56:00.607714 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 19:56:00.607725 systemd[1]: Created slice user.slice. Oct 2 19:56:00.607735 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:56:00.607745 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 19:56:00.607756 systemd[1]: Set up automount boot.automount. Oct 2 19:56:00.607767 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 19:56:00.607778 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 19:56:00.607789 systemd[1]: Stopped target initrd-fs.target. Oct 2 19:56:00.607800 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 19:56:00.607810 systemd[1]: Reached target integritysetup.target. Oct 2 19:56:00.607826 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:56:00.607840 systemd[1]: Reached target remote-fs.target. 
Oct 2 19:56:00.607852 systemd[1]: Reached target slices.target. Oct 2 19:56:00.607863 systemd[1]: Reached target swap.target. Oct 2 19:56:00.607873 systemd[1]: Reached target torcx.target. Oct 2 19:56:00.607883 systemd[1]: Reached target veritysetup.target. Oct 2 19:56:00.607893 systemd[1]: Listening on systemd-coredump.socket. Oct 2 19:56:00.607905 systemd[1]: Listening on systemd-initctl.socket. Oct 2 19:56:00.607915 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:56:00.607926 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:56:00.607937 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:56:00.607947 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 19:56:00.607959 systemd[1]: Mounting dev-hugepages.mount... Oct 2 19:56:00.607970 systemd[1]: Mounting dev-mqueue.mount... Oct 2 19:56:00.607980 systemd[1]: Mounting media.mount... Oct 2 19:56:00.607991 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 19:56:00.608002 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 19:56:00.608012 systemd[1]: Mounting tmp.mount... Oct 2 19:56:00.608022 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 19:56:00.608032 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 19:56:00.608043 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:56:00.608055 systemd[1]: Starting modprobe@configfs.service... Oct 2 19:56:00.608066 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 19:56:00.608077 systemd[1]: Starting modprobe@drm.service... Oct 2 19:56:00.608087 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 19:56:00.608098 systemd[1]: Starting modprobe@fuse.service... Oct 2 19:56:00.608108 systemd[1]: Starting modprobe@loop.service... Oct 2 19:56:00.608129 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 19:56:00.608151 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Oct 2 19:56:00.608164 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 19:56:00.608176 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 19:56:00.608187 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 19:56:00.608197 systemd[1]: Stopped systemd-journald.service. Oct 2 19:56:00.608207 systemd[1]: Starting systemd-journald.service... Oct 2 19:56:00.608217 kernel: loop: module loaded Oct 2 19:56:00.608227 kernel: fuse: init (API version 7.34) Oct 2 19:56:00.608238 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:56:00.608249 systemd[1]: Starting systemd-network-generator.service... Oct 2 19:56:00.608259 systemd[1]: Starting systemd-remount-fs.service... Oct 2 19:56:00.608271 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:56:00.608281 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 19:56:00.608292 systemd[1]: Stopped verity-setup.service. Oct 2 19:56:00.608303 systemd[1]: Mounted dev-hugepages.mount. Oct 2 19:56:00.608313 systemd[1]: Mounted dev-mqueue.mount. Oct 2 19:56:00.608324 systemd[1]: Mounted media.mount. Oct 2 19:56:00.608335 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 19:56:00.608345 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 19:56:00.608356 systemd[1]: Mounted tmp.mount. Oct 2 19:56:00.608368 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:56:00.608381 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 19:56:00.608391 systemd[1]: Finished modprobe@configfs.service. Oct 2 19:56:00.608402 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 19:56:00.608412 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 19:56:00.608423 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 19:56:00.608434 systemd[1]: Finished modprobe@drm.service. Oct 2 19:56:00.608446 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 19:56:00.608456 systemd[1]: Finished modprobe@efi_pstore.service. 
Oct 2 19:56:00.608467 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 19:56:00.608480 systemd-journald[995]: Journal started Oct 2 19:56:00.608523 systemd-journald[995]: Runtime Journal (/run/log/journal/48b98711dcd94597927bc76e467c7a9a) is 6.0M, max 48.7M, 42.6M free. Oct 2 19:55:58.518000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 19:55:58.674000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:55:58.674000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:55:58.674000 audit: BPF prog-id=10 op=LOAD Oct 2 19:55:58.674000 audit: BPF prog-id=10 op=UNLOAD Oct 2 19:55:58.674000 audit: BPF prog-id=11 op=LOAD Oct 2 19:55:58.674000 audit: BPF prog-id=11 op=UNLOAD Oct 2 19:56:00.486000 audit: BPF prog-id=12 op=LOAD Oct 2 19:56:00.486000 audit: BPF prog-id=3 op=UNLOAD Oct 2 19:56:00.486000 audit: BPF prog-id=13 op=LOAD Oct 2 19:56:00.486000 audit: BPF prog-id=14 op=LOAD Oct 2 19:56:00.486000 audit: BPF prog-id=4 op=UNLOAD Oct 2 19:56:00.486000 audit: BPF prog-id=5 op=UNLOAD Oct 2 19:56:00.487000 audit: BPF prog-id=15 op=LOAD Oct 2 19:56:00.487000 audit: BPF prog-id=12 op=UNLOAD Oct 2 19:56:00.488000 audit: BPF prog-id=16 op=LOAD Oct 2 19:56:00.488000 audit: BPF prog-id=17 op=LOAD Oct 2 19:56:00.488000 audit: BPF prog-id=13 op=UNLOAD Oct 2 19:56:00.488000 audit: BPF prog-id=14 op=UNLOAD Oct 2 19:56:00.488000 audit: BPF prog-id=18 op=LOAD Oct 2 19:56:00.488000 audit: BPF prog-id=15 op=UNLOAD Oct 2 19:56:00.488000 audit: BPF prog-id=19 op=LOAD Oct 2 19:56:00.488000 audit: BPF prog-id=20 op=LOAD Oct 2 19:56:00.488000 audit: BPF prog-id=16 op=UNLOAD Oct 2 19:56:00.488000 audit: BPF prog-id=17 op=UNLOAD Oct 2 19:56:00.489000 audit[1]: 
SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:00.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:00.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:00.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:00.496000 audit: BPF prog-id=18 op=UNLOAD Oct 2 19:56:00.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:00.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:00.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:00.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:56:00.572000 audit: BPF prog-id=21 op=LOAD Oct 2 19:56:00.572000 audit: BPF prog-id=22 op=LOAD Oct 2 19:56:00.572000 audit: BPF prog-id=23 op=LOAD Oct 2 19:56:00.572000 audit: BPF prog-id=19 op=UNLOAD Oct 2 19:56:00.572000 audit: BPF prog-id=20 op=UNLOAD Oct 2 19:56:00.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:00.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:00.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:00.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:00.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:00.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:56:00.602000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 19:56:00.602000 audit[995]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffed4e9ed0 a2=4000 a3=1 items=0 ppid=1 pid=995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:00.602000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 19:56:00.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:00.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:00.609537 systemd[1]: Finished modprobe@fuse.service. Oct 2 19:56:00.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:00.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:55:58.730836 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:55:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:56:00.485541 systemd[1]: Queued start job for default target multi-user.target. Oct 2 19:55:58.731554 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:55:58Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:56:00.485555 systemd[1]: Unnecessary job was removed for dev-vda6.device. Oct 2 19:55:58.731575 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:55:58Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:56:00.489366 systemd[1]: systemd-journald.service: Deactivated successfully. 
Oct 2 19:55:58.731609 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:55:58Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Oct 2 19:55:58.731618 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:55:58Z" level=debug msg="skipped missing lower profile" missing profile=oem
Oct 2 19:55:58.731645 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:55:58Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Oct 2 19:55:58.731657 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:55:58Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Oct 2 19:55:58.731862 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:55:58Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Oct 2 19:55:58.731894 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:55:58Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Oct 2 19:55:58.731906 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:55:58Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Oct 2 19:55:58.733098 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:55:58Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Oct 2 19:55:58.733132 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:55:58Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Oct 2 19:55:58.733171 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:55:58Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0
Oct 2 19:55:58.733186 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:55:58Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Oct 2 19:55:58.733204 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:55:58Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0
Oct 2 19:55:58.733217 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:55:58Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Oct 2 19:56:00.241304 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:56:00Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Oct 2 19:56:00.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:00.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:00.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:00.611160 systemd[1]: Started systemd-journald.service.
Oct 2 19:56:00.241574 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:56:00Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Oct 2 19:56:00.241676 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:56:00Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Oct 2 19:56:00.241844 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:56:00Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Oct 2 19:56:00.241897 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:56:00Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Oct 2 19:56:00.241955 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:56:00Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Oct 2 19:56:00.611513 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 2 19:56:00.611678 systemd[1]: Finished modprobe@loop.service.
Oct 2 19:56:00.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:00.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:00.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:00.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:00.612747 systemd[1]: Finished systemd-modules-load.service.
Oct 2 19:56:00.613694 systemd[1]: Finished systemd-network-generator.service.
Oct 2 19:56:00.614767 systemd[1]: Finished systemd-remount-fs.service.
Oct 2 19:56:00.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:00.615895 systemd[1]: Reached target network-pre.target.
Oct 2 19:56:00.618733 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Oct 2 19:56:00.620615 systemd[1]: Mounting sys-kernel-config.mount...
Oct 2 19:56:00.621380 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 2 19:56:00.623326 systemd[1]: Starting systemd-hwdb-update.service...
Oct 2 19:56:00.625828 systemd[1]: Starting systemd-journal-flush.service...
Oct 2 19:56:00.626626 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 2 19:56:00.627677 systemd[1]: Starting systemd-random-seed.service...
Oct 2 19:56:00.628444 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Oct 2 19:56:00.629770 systemd[1]: Starting systemd-sysctl.service...
Oct 2 19:56:00.631703 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Oct 2 19:56:00.634694 systemd-journald[995]: Time spent on flushing to /var/log/journal/48b98711dcd94597927bc76e467c7a9a is 14.247ms for 990 entries.
Oct 2 19:56:00.634694 systemd-journald[995]: System Journal (/var/log/journal/48b98711dcd94597927bc76e467c7a9a) is 8.0M, max 195.6M, 187.6M free.
Oct 2 19:56:00.664354 systemd-journald[995]: Received client request to flush runtime journal.
Oct 2 19:56:00.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:00.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:00.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:00.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:00.633530 systemd[1]: Mounted sys-kernel-config.mount.
Oct 2 19:56:00.645154 systemd[1]: Finished systemd-random-seed.service.
Oct 2 19:56:00.645946 systemd[1]: Reached target first-boot-complete.target.
Oct 2 19:56:00.647909 systemd[1]: Finished systemd-sysctl.service.
Oct 2 19:56:00.653237 systemd[1]: Finished flatcar-tmpfiles.service.
Oct 2 19:56:00.654104 systemd[1]: Finished systemd-udev-trigger.service.
Oct 2 19:56:00.656175 systemd[1]: Starting systemd-sysusers.service...
Oct 2 19:56:00.657875 systemd[1]: Starting systemd-udev-settle.service...
Oct 2 19:56:00.665708 systemd[1]: Finished systemd-journal-flush.service.
Oct 2 19:56:00.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:00.666830 udevadm[1034]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Oct 2 19:56:00.675977 systemd[1]: Finished systemd-sysusers.service.
Oct 2 19:56:00.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:00.677886 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Oct 2 19:56:00.696994 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Oct 2 19:56:00.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:01.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:01.019914 systemd[1]: Finished systemd-hwdb-update.service.
Oct 2 19:56:01.021000 audit: BPF prog-id=24 op=LOAD
Oct 2 19:56:01.021000 audit: BPF prog-id=25 op=LOAD
Oct 2 19:56:01.021000 audit: BPF prog-id=7 op=UNLOAD
Oct 2 19:56:01.021000 audit: BPF prog-id=8 op=UNLOAD
Oct 2 19:56:01.021945 systemd[1]: Starting systemd-udevd.service...
Oct 2 19:56:01.042045 systemd-udevd[1038]: Using default interface naming scheme 'v252'.
Oct 2 19:56:01.057831 systemd[1]: Started systemd-udevd.service.
Oct 2 19:56:01.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:01.059000 audit: BPF prog-id=26 op=LOAD
Oct 2 19:56:01.061205 systemd[1]: Starting systemd-networkd.service...
Oct 2 19:56:01.070000 audit: BPF prog-id=27 op=LOAD
Oct 2 19:56:01.070000 audit: BPF prog-id=28 op=LOAD
Oct 2 19:56:01.070000 audit: BPF prog-id=29 op=LOAD
Oct 2 19:56:01.071395 systemd[1]: Starting systemd-userdbd.service...
Oct 2 19:56:01.084877 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped.
Oct 2 19:56:01.118300 systemd[1]: Started systemd-userdbd.service.
Oct 2 19:56:01.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:01.122035 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Oct 2 19:56:01.169678 systemd[1]: Finished systemd-udev-settle.service.
Oct 2 19:56:01.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:01.171752 systemd[1]: Starting lvm2-activation-early.service...
Oct 2 19:56:01.186424 systemd-networkd[1046]: lo: Link UP
Oct 2 19:56:01.186693 lvm[1071]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 2 19:56:01.186873 systemd-networkd[1046]: lo: Gained carrier
Oct 2 19:56:01.187387 systemd-networkd[1046]: Enumeration completed
Oct 2 19:56:01.187582 systemd-networkd[1046]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 2 19:56:01.187590 systemd[1]: Started systemd-networkd.service.
Oct 2 19:56:01.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:01.195323 systemd-networkd[1046]: eth0: Link UP
Oct 2 19:56:01.195334 systemd-networkd[1046]: eth0: Gained carrier
Oct 2 19:56:01.220304 systemd-networkd[1046]: eth0: DHCPv4 address 10.0.0.10/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 2 19:56:01.221041 systemd[1]: Finished lvm2-activation-early.service.
Oct 2 19:56:01.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:01.221899 systemd[1]: Reached target cryptsetup.target.
Oct 2 19:56:01.223896 systemd[1]: Starting lvm2-activation.service...
Oct 2 19:56:01.228372 lvm[1072]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 2 19:56:01.260114 systemd[1]: Finished lvm2-activation.service.
Oct 2 19:56:01.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:01.260868 systemd[1]: Reached target local-fs-pre.target.
Oct 2 19:56:01.261506 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 2 19:56:01.261538 systemd[1]: Reached target local-fs.target.
Oct 2 19:56:01.262067 systemd[1]: Reached target machines.target.
Oct 2 19:56:01.263927 systemd[1]: Starting ldconfig.service...
Oct 2 19:56:01.264941 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Oct 2 19:56:01.265006 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 2 19:56:01.266321 systemd[1]: Starting systemd-boot-update.service...
Oct 2 19:56:01.268035 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Oct 2 19:56:01.270340 systemd[1]: Starting systemd-machine-id-commit.service...
Oct 2 19:56:01.272039 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Oct 2 19:56:01.272108 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Oct 2 19:56:01.273471 systemd[1]: Starting systemd-tmpfiles-setup.service...
Oct 2 19:56:01.275751 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1074 (bootctl)
Oct 2 19:56:01.277113 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Oct 2 19:56:01.283014 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Oct 2 19:56:01.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:01.290000 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Oct 2 19:56:01.292568 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 2 19:56:01.294441 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 2 19:56:01.379687 systemd[1]: Finished systemd-machine-id-commit.service.
Oct 2 19:56:01.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:01.385797 systemd-fsck[1082]: fsck.fat 4.2 (2021-01-31)
Oct 2 19:56:01.385797 systemd-fsck[1082]: /dev/vda1: 236 files, 113463/258078 clusters
Oct 2 19:56:01.393859 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Oct 2 19:56:01.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:01.514586 ldconfig[1073]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 2 19:56:01.524055 systemd[1]: Finished ldconfig.service.
Oct 2 19:56:01.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:01.589767 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 2 19:56:01.591492 systemd[1]: Mounting boot.mount...
Oct 2 19:56:01.599528 systemd[1]: Mounted boot.mount.
Oct 2 19:56:01.607347 systemd[1]: Finished systemd-boot-update.service.
Oct 2 19:56:01.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:01.663241 systemd[1]: Finished systemd-tmpfiles-setup.service.
Oct 2 19:56:01.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:01.665752 systemd[1]: Starting audit-rules.service...
Oct 2 19:56:01.667838 systemd[1]: Starting clean-ca-certificates.service...
Oct 2 19:56:01.671022 systemd[1]: Starting systemd-journal-catalog-update.service...
Oct 2 19:56:01.673000 audit: BPF prog-id=30 op=LOAD
Oct 2 19:56:01.674728 systemd[1]: Starting systemd-resolved.service...
Oct 2 19:56:01.677000 audit: BPF prog-id=31 op=LOAD
Oct 2 19:56:01.678206 systemd[1]: Starting systemd-timesyncd.service...
Oct 2 19:56:01.679918 systemd[1]: Starting systemd-update-utmp.service...
Oct 2 19:56:01.681090 systemd[1]: Finished clean-ca-certificates.service.
Oct 2 19:56:01.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:01.682183 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 2 19:56:01.685000 audit[1097]: SYSTEM_BOOT pid=1097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:01.689107 systemd[1]: Finished systemd-journal-catalog-update.service.
Oct 2 19:56:01.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:01.691275 systemd[1]: Starting systemd-update-done.service...
Oct 2 19:56:01.692115 systemd[1]: Finished systemd-update-utmp.service.
Oct 2 19:56:01.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:01.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:01.698524 systemd[1]: Finished systemd-update-done.service.
Oct 2 19:56:01.713000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Oct 2 19:56:01.713000 audit[1107]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc404d490 a2=420 a3=0 items=0 ppid=1086 pid=1107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 2 19:56:01.713000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Oct 2 19:56:01.714348 augenrules[1107]: No rules
Oct 2 19:56:01.715356 systemd[1]: Finished audit-rules.service.
Oct 2 19:56:01.725858 systemd-resolved[1092]: Positive Trust Anchors:
Oct 2 19:56:01.725871 systemd-resolved[1092]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 2 19:56:01.725899 systemd-resolved[1092]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Oct 2 19:56:01.728047 systemd[1]: Started systemd-timesyncd.service.
Oct 2 19:56:01.729984 systemd[1]: Reached target time-set.target.
Oct 2 19:56:01.731803 systemd-timesyncd[1096]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 2 19:56:01.731908 systemd-timesyncd[1096]: Initial clock synchronization to Mon 2023-10-02 19:56:01.994880 UTC.
Oct 2 19:56:01.749347 systemd-resolved[1092]: Defaulting to hostname 'linux'.
Oct 2 19:56:01.750837 systemd[1]: Started systemd-resolved.service.
Oct 2 19:56:01.751603 systemd[1]: Reached target network.target.
Oct 2 19:56:01.752169 systemd[1]: Reached target nss-lookup.target.
Oct 2 19:56:01.752717 systemd[1]: Reached target sysinit.target.
Oct 2 19:56:01.753323 systemd[1]: Started motdgen.path.
Oct 2 19:56:01.753811 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Oct 2 19:56:01.754728 systemd[1]: Started logrotate.timer.
Oct 2 19:56:01.755346 systemd[1]: Started mdadm.timer.
Oct 2 19:56:01.755803 systemd[1]: Started systemd-tmpfiles-clean.timer.
Oct 2 19:56:01.756389 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 2 19:56:01.756415 systemd[1]: Reached target paths.target.
Oct 2 19:56:01.756915 systemd[1]: Reached target timers.target.
Oct 2 19:56:01.758088 systemd[1]: Listening on dbus.socket.
Oct 2 19:56:01.759754 systemd[1]: Starting docker.socket...
Oct 2 19:56:01.763002 systemd[1]: Listening on sshd.socket.
Oct 2 19:56:01.763651 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 2 19:56:01.764100 systemd[1]: Listening on docker.socket.
Oct 2 19:56:01.764789 systemd[1]: Reached target sockets.target.
Oct 2 19:56:01.765342 systemd[1]: Reached target basic.target.
Oct 2 19:56:01.765878 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Oct 2 19:56:01.765907 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Oct 2 19:56:01.766958 systemd[1]: Starting containerd.service...
Oct 2 19:56:01.768544 systemd[1]: Starting dbus.service...
Oct 2 19:56:01.770009 systemd[1]: Starting enable-oem-cloudinit.service...
Oct 2 19:56:01.771689 systemd[1]: Starting extend-filesystems.service...
Oct 2 19:56:01.772409 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Oct 2 19:56:01.773636 systemd[1]: Starting motdgen.service...
Oct 2 19:56:01.775271 systemd[1]: Starting prepare-cni-plugins.service...
Oct 2 19:56:01.780169 systemd[1]: Starting prepare-critools.service...
Oct 2 19:56:01.781051 jq[1117]: false
Oct 2 19:56:01.781878 systemd[1]: Starting ssh-key-proc-cmdline.service...
Oct 2 19:56:01.783719 systemd[1]: Starting sshd-keygen.service...
Oct 2 19:56:01.787944 systemd[1]: Starting systemd-logind.service...
Oct 2 19:56:01.789176 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 2 19:56:01.789246 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 2 19:56:01.789759 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 2 19:56:01.790465 systemd[1]: Starting update-engine.service...
Oct 2 19:56:01.792058 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Oct 2 19:56:01.796798 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 2 19:56:01.796978 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Oct 2 19:56:01.797708 jq[1133]: true
Oct 2 19:56:01.798656 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 2 19:56:01.798827 systemd[1]: Finished ssh-key-proc-cmdline.service.
Oct 2 19:56:01.805776 systemd[1]: motdgen.service: Deactivated successfully.
Oct 2 19:56:01.805953 systemd[1]: Finished motdgen.service.
Oct 2 19:56:01.812105 tar[1139]: ./
Oct 2 19:56:01.812105 tar[1139]: ./macvlan
Oct 2 19:56:01.813891 dbus-daemon[1116]: [system] SELinux support is enabled
Oct 2 19:56:01.816167 systemd[1]: Started dbus.service.
Oct 2 19:56:01.818278 extend-filesystems[1118]: Found vda
Oct 2 19:56:01.819199 extend-filesystems[1118]: Found vda1
Oct 2 19:56:01.819199 extend-filesystems[1118]: Found vda2
Oct 2 19:56:01.819199 extend-filesystems[1118]: Found vda3
Oct 2 19:56:01.819199 extend-filesystems[1118]: Found usr
Oct 2 19:56:01.819199 extend-filesystems[1118]: Found vda4
Oct 2 19:56:01.819199 extend-filesystems[1118]: Found vda6
Oct 2 19:56:01.819199 extend-filesystems[1118]: Found vda7
Oct 2 19:56:01.819199 extend-filesystems[1118]: Found vda9
Oct 2 19:56:01.819199 extend-filesystems[1118]: Checking size of /dev/vda9
Oct 2 19:56:01.818510 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 2 19:56:01.835131 tar[1140]: crictl
Oct 2 19:56:01.836344 jq[1141]: true
Oct 2 19:56:01.818534 systemd[1]: Reached target system-config.target.
Oct 2 19:56:01.820876 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 2 19:56:01.820904 systemd[1]: Reached target user-config.target.
Oct 2 19:56:01.852174 extend-filesystems[1118]: Old size kept for /dev/vda9
Oct 2 19:56:01.851919 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 2 19:56:01.853245 systemd[1]: Finished extend-filesystems.service.
Oct 2 19:56:01.876559 tar[1139]: ./static
Oct 2 19:56:01.890243 update_engine[1132]: I1002 19:56:01.889153 1132 main.cc:92] Flatcar Update Engine starting
Oct 2 19:56:01.902788 tar[1139]: ./vlan
Oct 2 19:56:01.907477 systemd[1]: Started update-engine.service.
Oct 2 19:56:01.907614 update_engine[1132]: I1002 19:56:01.907500 1132 update_check_scheduler.cc:74] Next update check in 2m24s
Oct 2 19:56:01.911461 systemd[1]: Started locksmithd.service.
Oct 2 19:56:01.928338 systemd-logind[1129]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 2 19:56:01.928835 systemd-logind[1129]: New seat seat0.
Oct 2 19:56:01.936767 systemd[1]: Started systemd-logind.service.
Oct 2 19:56:01.942510 tar[1139]: ./portmap
Oct 2 19:56:01.959405 env[1142]: time="2023-10-02T19:56:01.959348160Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Oct 2 19:56:01.961208 bash[1169]: Updated "/home/core/.ssh/authorized_keys"
Oct 2 19:56:01.962288 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Oct 2 19:56:01.977614 tar[1139]: ./host-local
Oct 2 19:56:01.980667 env[1142]: time="2023-10-02T19:56:01.980620080Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 2 19:56:01.980797 env[1142]: time="2023-10-02T19:56:01.980777040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 2 19:56:01.991534 env[1142]: time="2023-10-02T19:56:01.991476400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 2 19:56:01.991534 env[1142]: time="2023-10-02T19:56:01.991524240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 2 19:56:01.991809 env[1142]: time="2023-10-02T19:56:01.991782960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 2 19:56:01.991809 env[1142]: time="2023-10-02T19:56:01.991805400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 2 19:56:01.991903 env[1142]: time="2023-10-02T19:56:01.991828640Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Oct 2 19:56:01.991903 env[1142]: time="2023-10-02T19:56:01.991840400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 2 19:56:01.991950 env[1142]: time="2023-10-02T19:56:01.991918720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 2 19:56:01.992249 env[1142]: time="2023-10-02T19:56:01.992225120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 2 19:56:01.992373 env[1142]: time="2023-10-02T19:56:01.992351720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 2 19:56:01.992373 env[1142]: time="2023-10-02T19:56:01.992370520Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 2 19:56:01.992448 env[1142]: time="2023-10-02T19:56:01.992429680Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Oct 2 19:56:01.992486 env[1142]: time="2023-10-02T19:56:01.992449160Z" level=info msg="metadata content store policy set" policy=shared
Oct 2 19:56:01.999165 env[1142]: time="2023-10-02T19:56:01.999107320Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 2 19:56:01.999165 env[1142]: time="2023-10-02T19:56:01.999168720Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 2 19:56:01.999306 env[1142]: time="2023-10-02T19:56:01.999182240Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 2 19:56:01.999306 env[1142]: time="2023-10-02T19:56:01.999217120Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 2 19:56:01.999306 env[1142]: time="2023-10-02T19:56:01.999232160Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 2 19:56:01.999306 env[1142]: time="2023-10-02T19:56:01.999248200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 2 19:56:01.999306 env[1142]: time="2023-10-02T19:56:01.999261200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 2 19:56:01.999668 env[1142]: time="2023-10-02T19:56:01.999642400Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 2 19:56:01.999718 env[1142]: time="2023-10-02T19:56:01.999669720Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Oct 2 19:56:01.999718 env[1142]: time="2023-10-02T19:56:01.999685160Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 2 19:56:01.999718 env[1142]: time="2023-10-02T19:56:01.999698480Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 2 19:56:01.999718 env[1142]: time="2023-10-02T19:56:01.999712480Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 2 19:56:01.999887 env[1142]: time="2023-10-02T19:56:01.999864480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 2 19:56:01.999974 env[1142]: time="2023-10-02T19:56:01.999957080Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 2 19:56:02.000257 env[1142]: time="2023-10-02T19:56:02.000234920Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 2 19:56:02.000311 env[1142]: time="2023-10-02T19:56:02.000266774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 2 19:56:02.000311 env[1142]: time="2023-10-02T19:56:02.000280946Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 2 19:56:02.000517 env[1142]: time="2023-10-02T19:56:02.000499672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 2 19:56:02.000548 env[1142]: time="2023-10-02T19:56:02.000516901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 2 19:56:02.000548 env[1142]: time="2023-10-02T19:56:02.000530948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 2 19:56:02.000548 env[1142]: time="2023-10-02T19:56:02.000543550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 2 19:56:02.000630 env[1142]: time="2023-10-02T19:56:02.000556234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 2 19:56:02.000630 env[1142]: time="2023-10-02T19:56:02.000569537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 2 19:56:02.000630 env[1142]: time="2023-10-02T19:56:02.000583006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 2 19:56:02.000630 env[1142]: time="2023-10-02T19:56:02.000595401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 2 19:56:02.000630 env[1142]: time="2023-10-02T19:56:02.000608911Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 2 19:56:02.000756 env[1142]: time="2023-10-02T19:56:02.000735049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 2 19:56:02.000787 env[1142]: time="2023-10-02T19:56:02.000757690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 2 19:56:02.000787 env[1142]: time="2023-10-02T19:56:02.000772688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 2 19:56:02.000828 env[1142]: time="2023-10-02T19:56:02.000785083Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 2 19:56:02.000828 env[1142]: time="2023-10-02T19:56:02.000800204Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..."
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 19:56:02.000828 env[1142]: time="2023-10-02T19:56:02.000811401Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 19:56:02.000913 env[1142]: time="2023-10-02T19:56:02.000832307Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 19:56:02.000913 env[1142]: time="2023-10-02T19:56:02.000885191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 2 19:56:02.001176 env[1142]: time="2023-10-02T19:56:02.001105901Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 19:56:02.001176 env[1142]: time="2023-10-02T19:56:02.001187624Z" level=info msg="Connect containerd service" Oct 2 19:56:02.006428 env[1142]: time="2023-10-02T19:56:02.001221792Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 19:56:02.006428 env[1142]: time="2023-10-02T19:56:02.002202591Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 19:56:02.006428 env[1142]: time="2023-10-02T19:56:02.002646407Z" level=info msg="Start subscribing containerd event" Oct 2 19:56:02.006428 env[1142]: time="2023-10-02T19:56:02.002707844Z" level=info msg="Start recovering state" Oct 2 19:56:02.006428 env[1142]: time="2023-10-02T19:56:02.002777378Z" level=info msg="Start event monitor" Oct 2 19:56:02.006428 env[1142]: time="2023-10-02T19:56:02.002816505Z" level=info msg="Start snapshots syncer" Oct 2 19:56:02.006428 env[1142]: time="2023-10-02T19:56:02.002829189Z" level=info msg="Start cni network conf syncer for default" Oct 2 19:56:02.006428 env[1142]: 
time="2023-10-02T19:56:02.002961730Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 2 19:56:02.006428 env[1142]: time="2023-10-02T19:56:02.003030645Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 2 19:56:02.006428 env[1142]: time="2023-10-02T19:56:02.003072539Z" level=info msg="Start streaming server" Oct 2 19:56:02.007067 env[1142]: time="2023-10-02T19:56:02.007016807Z" level=info msg="containerd successfully booted in 0.049564s" Oct 2 19:56:02.007099 systemd[1]: Started containerd.service. Oct 2 19:56:02.011205 tar[1139]: ./vrf Oct 2 19:56:02.041860 tar[1139]: ./bridge Oct 2 19:56:02.077088 tar[1139]: ./tuning Oct 2 19:56:02.106373 tar[1139]: ./firewall Oct 2 19:56:02.149372 tar[1139]: ./host-device Oct 2 19:56:02.167224 systemd[1]: Finished prepare-critools.service. Oct 2 19:56:02.178557 tar[1139]: ./sbr Oct 2 19:56:02.179695 locksmithd[1171]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 19:56:02.207323 tar[1139]: ./loopback Oct 2 19:56:02.231252 tar[1139]: ./dhcp Oct 2 19:56:02.297780 tar[1139]: ./ptp Oct 2 19:56:02.326574 tar[1139]: ./ipvlan Oct 2 19:56:02.355035 tar[1139]: ./bandwidth Oct 2 19:56:02.395801 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 19:56:02.451350 systemd[1]: Created slice system-sshd.slice. Oct 2 19:56:03.087646 systemd-networkd[1046]: eth0: Gained IPv6LL Oct 2 19:56:04.292075 sshd_keygen[1138]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 19:56:04.312869 systemd[1]: Finished sshd-keygen.service. Oct 2 19:56:04.315025 systemd[1]: Starting issuegen.service... Oct 2 19:56:04.316673 systemd[1]: Started sshd@0-10.0.0.10:22-10.0.0.1:46850.service. Oct 2 19:56:04.321027 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 19:56:04.321207 systemd[1]: Finished issuegen.service. Oct 2 19:56:04.323491 systemd[1]: Starting systemd-user-sessions.service... 
Oct 2 19:56:04.330972 systemd[1]: Finished systemd-user-sessions.service. Oct 2 19:56:04.333139 systemd[1]: Started getty@tty1.service. Oct 2 19:56:04.335031 systemd[1]: Started serial-getty@ttyAMA0.service. Oct 2 19:56:04.335888 systemd[1]: Reached target getty.target. Oct 2 19:56:04.336556 systemd[1]: Reached target multi-user.target. Oct 2 19:56:04.338591 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 19:56:04.352353 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 19:56:04.352533 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 19:56:04.353338 systemd[1]: Startup finished in 626ms (kernel) + 6.898s (initrd) + 5.870s (userspace) = 13.396s. Oct 2 19:56:04.384249 sshd[1192]: Accepted publickey for core from 10.0.0.1 port 46850 ssh2: RSA SHA256:HYZQRhxVAAt6Gcr+zBcKZEn/OixtikngqD7jOIOR0c8 Oct 2 19:56:04.386776 sshd[1192]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:56:04.399074 systemd[1]: Created slice user-500.slice. Oct 2 19:56:04.400997 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 19:56:04.402950 systemd-logind[1129]: New session 1 of user core. Oct 2 19:56:04.411753 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 19:56:04.413339 systemd[1]: Starting user@500.service... Oct 2 19:56:04.417111 (systemd)[1201]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:56:04.491739 systemd[1201]: Queued start job for default target default.target. Oct 2 19:56:04.492271 systemd[1201]: Reached target paths.target. Oct 2 19:56:04.492292 systemd[1201]: Reached target sockets.target. Oct 2 19:56:04.492303 systemd[1201]: Reached target timers.target. Oct 2 19:56:04.492313 systemd[1201]: Reached target basic.target. Oct 2 19:56:04.492369 systemd[1201]: Reached target default.target. Oct 2 19:56:04.492393 systemd[1201]: Startup finished in 68ms. Oct 2 19:56:04.492630 systemd[1]: Started user@500.service. 
Oct 2 19:56:04.493612 systemd[1]: Started session-1.scope. Oct 2 19:56:04.547224 systemd[1]: Started sshd@1-10.0.0.10:22-10.0.0.1:56160.service. Oct 2 19:56:04.590575 sshd[1210]: Accepted publickey for core from 10.0.0.1 port 56160 ssh2: RSA SHA256:HYZQRhxVAAt6Gcr+zBcKZEn/OixtikngqD7jOIOR0c8 Oct 2 19:56:04.592148 sshd[1210]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:56:04.595998 systemd-logind[1129]: New session 2 of user core. Oct 2 19:56:04.596455 systemd[1]: Started session-2.scope. Oct 2 19:56:04.655116 sshd[1210]: pam_unix(sshd:session): session closed for user core Oct 2 19:56:04.659334 systemd[1]: Started sshd@2-10.0.0.10:22-10.0.0.1:56174.service. Oct 2 19:56:04.659797 systemd[1]: sshd@1-10.0.0.10:22-10.0.0.1:56160.service: Deactivated successfully. Oct 2 19:56:04.660553 systemd[1]: session-2.scope: Deactivated successfully. Oct 2 19:56:04.661105 systemd-logind[1129]: Session 2 logged out. Waiting for processes to exit. Oct 2 19:56:04.662195 systemd-logind[1129]: Removed session 2. Oct 2 19:56:04.700666 sshd[1215]: Accepted publickey for core from 10.0.0.1 port 56174 ssh2: RSA SHA256:HYZQRhxVAAt6Gcr+zBcKZEn/OixtikngqD7jOIOR0c8 Oct 2 19:56:04.702018 sshd[1215]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:56:04.705476 systemd-logind[1129]: New session 3 of user core. Oct 2 19:56:04.706289 systemd[1]: Started session-3.scope. Oct 2 19:56:04.759599 sshd[1215]: pam_unix(sshd:session): session closed for user core Oct 2 19:56:04.763807 systemd[1]: sshd@2-10.0.0.10:22-10.0.0.1:56174.service: Deactivated successfully. Oct 2 19:56:04.764399 systemd[1]: session-3.scope: Deactivated successfully. Oct 2 19:56:04.764986 systemd-logind[1129]: Session 3 logged out. Waiting for processes to exit. Oct 2 19:56:04.766059 systemd[1]: Started sshd@3-10.0.0.10:22-10.0.0.1:56186.service. Oct 2 19:56:04.766753 systemd-logind[1129]: Removed session 3. 
Oct 2 19:56:04.806783 sshd[1223]: Accepted publickey for core from 10.0.0.1 port 56186 ssh2: RSA SHA256:HYZQRhxVAAt6Gcr+zBcKZEn/OixtikngqD7jOIOR0c8 Oct 2 19:56:04.809238 sshd[1223]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:56:04.812464 systemd-logind[1129]: New session 4 of user core. Oct 2 19:56:04.813302 systemd[1]: Started session-4.scope. Oct 2 19:56:04.870028 sshd[1223]: pam_unix(sshd:session): session closed for user core Oct 2 19:56:04.874134 systemd[1]: sshd@3-10.0.0.10:22-10.0.0.1:56186.service: Deactivated successfully. Oct 2 19:56:04.875031 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 19:56:04.875802 systemd-logind[1129]: Session 4 logged out. Waiting for processes to exit. Oct 2 19:56:04.877443 systemd[1]: Started sshd@4-10.0.0.10:22-10.0.0.1:56198.service. Oct 2 19:56:04.878585 systemd-logind[1129]: Removed session 4. Oct 2 19:56:04.936306 sshd[1229]: Accepted publickey for core from 10.0.0.1 port 56198 ssh2: RSA SHA256:HYZQRhxVAAt6Gcr+zBcKZEn/OixtikngqD7jOIOR0c8 Oct 2 19:56:04.938130 sshd[1229]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:56:04.941516 systemd-logind[1129]: New session 5 of user core. Oct 2 19:56:04.943511 systemd[1]: Started session-5.scope. Oct 2 19:56:05.003414 sudo[1232]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 19:56:05.003620 sudo[1232]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:56:05.017112 dbus-daemon[1116]: avc: received setenforce notice (enforcing=1) Oct 2 19:56:05.017409 sudo[1232]: pam_unix(sudo:session): session closed for user root Oct 2 19:56:05.019495 sshd[1229]: pam_unix(sshd:session): session closed for user core Oct 2 19:56:05.024426 systemd[1]: sshd@4-10.0.0.10:22-10.0.0.1:56198.service: Deactivated successfully. Oct 2 19:56:05.025305 systemd[1]: session-5.scope: Deactivated successfully. 
Oct 2 19:56:05.026070 systemd-logind[1129]: Session 5 logged out. Waiting for processes to exit. Oct 2 19:56:05.027675 systemd[1]: Started sshd@5-10.0.0.10:22-10.0.0.1:56214.service. Oct 2 19:56:05.028749 systemd-logind[1129]: Removed session 5. Oct 2 19:56:05.068477 sshd[1236]: Accepted publickey for core from 10.0.0.1 port 56214 ssh2: RSA SHA256:HYZQRhxVAAt6Gcr+zBcKZEn/OixtikngqD7jOIOR0c8 Oct 2 19:56:05.070612 sshd[1236]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:56:05.074090 systemd-logind[1129]: New session 6 of user core. Oct 2 19:56:05.076028 systemd[1]: Started session-6.scope. Oct 2 19:56:05.132127 sudo[1240]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 19:56:05.132375 sudo[1240]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:56:05.135336 sudo[1240]: pam_unix(sudo:session): session closed for user root Oct 2 19:56:05.140030 sudo[1239]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 19:56:05.140257 sudo[1239]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:56:05.149019 systemd[1]: Stopping audit-rules.service... Oct 2 19:56:05.149000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:56:05.150720 auditctl[1243]: No rules Oct 2 19:56:05.151126 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 19:56:05.151291 systemd[1]: Stopped audit-rules.service. 
Oct 2 19:56:05.152106 kernel: kauditd_printk_skb: 129 callbacks suppressed Oct 2 19:56:05.152142 kernel: audit: type=1305 audit(1696276565.149:169): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:56:05.152166 kernel: audit: type=1300 audit(1696276565.149:169): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff73bcff0 a2=420 a3=0 items=0 ppid=1 pid=1243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:05.149000 audit[1243]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff73bcff0 a2=420 a3=0 items=0 ppid=1 pid=1243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:05.152713 systemd[1]: Starting audit-rules.service... Oct 2 19:56:05.149000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:56:05.155338 kernel: audit: type=1327 audit(1696276565.149:169): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:56:05.155365 kernel: audit: type=1131 audit(1696276565.150:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:05.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:05.171191 augenrules[1260]: No rules Oct 2 19:56:05.172137 systemd[1]: Finished audit-rules.service. 
Oct 2 19:56:05.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:05.173973 sudo[1239]: pam_unix(sudo:session): session closed for user root Oct 2 19:56:05.173000 audit[1239]: USER_END pid=1239 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:05.186560 kernel: audit: type=1130 audit(1696276565.172:171): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:05.186651 kernel: audit: type=1106 audit(1696276565.173:172): pid=1239 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:05.186672 kernel: audit: type=1104 audit(1696276565.173:173): pid=1239 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:05.173000 audit[1239]: CRED_DISP pid=1239 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:05.187293 sshd[1236]: pam_unix(sshd:session): session closed for user core Oct 2 19:56:05.190015 systemd[1]: sshd@5-10.0.0.10:22-10.0.0.1:56214.service: Deactivated successfully. Oct 2 19:56:05.190756 systemd[1]: session-6.scope: Deactivated successfully. 
Oct 2 19:56:05.187000 audit[1236]: USER_END pid=1236 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:56:05.191247 systemd-logind[1129]: Session 6 logged out. Waiting for processes to exit. Oct 2 19:56:05.195121 systemd[1]: Started sshd@6-10.0.0.10:22-10.0.0.1:56224.service. Oct 2 19:56:05.195844 systemd-logind[1129]: Removed session 6. Oct 2 19:56:05.206772 kernel: audit: type=1106 audit(1696276565.187:174): pid=1236 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:56:05.206854 kernel: audit: type=1104 audit(1696276565.187:175): pid=1236 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:56:05.187000 audit[1236]: CRED_DISP pid=1236 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:56:05.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.10:22-10.0.0.1:56214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:05.218635 kernel: audit: type=1131 audit(1696276565.189:176): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.10:22-10.0.0.1:56214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 2 19:56:05.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.10:22-10.0.0.1:56224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:05.247000 audit[1266]: USER_ACCT pid=1266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:56:05.248893 sshd[1266]: Accepted publickey for core from 10.0.0.1 port 56224 ssh2: RSA SHA256:HYZQRhxVAAt6Gcr+zBcKZEn/OixtikngqD7jOIOR0c8 Oct 2 19:56:05.248000 audit[1266]: CRED_ACQ pid=1266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:56:05.248000 audit[1266]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff00dd9d0 a2=3 a3=1 items=0 ppid=1 pid=1266 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:05.248000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 19:56:05.250361 sshd[1266]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:56:05.254234 systemd-logind[1129]: New session 7 of user core. Oct 2 19:56:05.254715 systemd[1]: Started session-7.scope. 
Oct 2 19:56:05.256000 audit[1266]: USER_START pid=1266 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:56:05.258000 audit[1268]: CRED_ACQ pid=1268 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:56:05.307000 audit[1269]: USER_ACCT pid=1269 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:05.307673 sudo[1269]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 19:56:05.307000 audit[1269]: CRED_REFR pid=1269 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:05.307888 sudo[1269]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:56:05.309000 audit[1269]: USER_START pid=1269 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:05.843967 systemd[1]: Reloading. 
Oct 2 19:56:05.890727 /usr/lib/systemd/system-generators/torcx-generator[1298]: time="2023-10-02T19:56:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:56:05.890756 /usr/lib/systemd/system-generators/torcx-generator[1298]: time="2023-10-02T19:56:05Z" level=info msg="torcx already run" Oct 2 19:56:05.958572 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:56:05.958593 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:56:05.975997 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Oct 2 19:56:06.022000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.022000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.022000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.022000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.022000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.022000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.022000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.022000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.022000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.022000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.022000 audit: BPF prog-id=37 op=LOAD Oct 2 19:56:06.022000 audit: BPF prog-id=32 op=UNLOAD Oct 2 19:56:06.022000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.022000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.022000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.022000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.022000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.022000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.022000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.022000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.022000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.022000 audit: BPF prog-id=38 op=LOAD Oct 2 19:56:06.022000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.022000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.022000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.022000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.022000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.022000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.022000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.022000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.022000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:56:06.022000 audit: BPF prog-id=39 op=LOAD Oct 2 19:56:06.022000 audit: BPF prog-id=33 op=UNLOAD Oct 2 19:56:06.022000 audit: BPF prog-id=34 op=UNLOAD Oct 2 19:56:06.024000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.024000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.024000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.024000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.024000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.024000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.024000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.024000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.024000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:56:06.024000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.024000 audit: BPF prog-id=40 op=LOAD Oct 2 19:56:06.024000 audit: BPF prog-id=27 op=UNLOAD Oct 2 19:56:06.024000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.024000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.024000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.024000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.024000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.024000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.024000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.024000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:56:06.024000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.024000 audit: BPF prog-id=41 op=LOAD Oct 2 19:56:06.024000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.024000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.024000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.024000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.024000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.024000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.024000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.024000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.024000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.024000 audit: BPF prog-id=42 op=LOAD Oct 2 19:56:06.024000 audit: BPF prog-id=28 op=UNLOAD Oct 2 19:56:06.024000 audit: BPF prog-id=29 op=UNLOAD Oct 2 19:56:06.025000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.025000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.025000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.025000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.025000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.025000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.025000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.025000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.025000 audit[1]: AVC avc: 
denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.025000 audit: BPF prog-id=43 op=LOAD Oct 2 19:56:06.025000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.025000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.025000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.025000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.025000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.025000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.025000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.025000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.025000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.025000 audit: BPF prog-id=44 op=LOAD Oct 2 19:56:06.025000 audit: BPF prog-id=24 op=UNLOAD Oct 2 19:56:06.025000 audit: BPF prog-id=25 op=UNLOAD Oct 2 19:56:06.026000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.026000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.026000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.026000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.026000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.026000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.026000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.026000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.026000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.026000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.026000 audit: BPF prog-id=45 op=LOAD Oct 2 19:56:06.026000 audit: BPF prog-id=21 op=UNLOAD Oct 2 19:56:06.026000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.026000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.026000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.026000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.026000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.026000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.026000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.026000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.026000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.026000 audit: BPF prog-id=46 op=LOAD Oct 2 19:56:06.026000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.026000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.026000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.026000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.026000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.026000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.026000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.026000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:56:06.026000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.026000 audit: BPF prog-id=47 op=LOAD Oct 2 19:56:06.026000 audit: BPF prog-id=22 op=UNLOAD Oct 2 19:56:06.026000 audit: BPF prog-id=23 op=UNLOAD Oct 2 19:56:06.027000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.027000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.027000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.027000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.027000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.027000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.027000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.027000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.027000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.027000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.027000 audit: BPF prog-id=48 op=LOAD Oct 2 19:56:06.027000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:56:06.027000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.027000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.027000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.027000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.027000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.027000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.027000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.027000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.027000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.028000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.028000 audit: BPF prog-id=49 op=LOAD Oct 2 19:56:06.028000 audit: BPF prog-id=26 op=UNLOAD Oct 2 19:56:06.028000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.028000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.028000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.028000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.028000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.028000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.028000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.028000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.028000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.028000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.028000 audit: BPF prog-id=50 op=LOAD Oct 2 19:56:06.028000 audit: BPF prog-id=31 op=UNLOAD Oct 2 19:56:06.029000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.029000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.029000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.029000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.029000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:56:06.029000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:56:06.029000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:56:06.029000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:56:06.029000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:56:06.029000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:56:06.029000 audit: BPF prog-id=51 op=LOAD
Oct 2 19:56:06.029000 audit: BPF prog-id=30 op=UNLOAD
Oct 2 19:56:06.037912 systemd[1]: Starting systemd-networkd-wait-online.service...
Oct 2 19:56:06.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:06.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:06.226251 systemd[1]: Finished systemd-networkd-wait-online.service.
Oct 2 19:56:06.226846 systemd[1]: Reached target network-online.target.
Oct 2 19:56:06.228619 systemd[1]: Started kubelet.service.
Oct 2 19:56:06.243414 systemd[1]: Starting coreos-metadata.service...
Oct 2 19:56:06.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:06.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:06.252771 systemd[1]: coreos-metadata.service: Deactivated successfully.
Oct 2 19:56:06.252955 systemd[1]: Finished coreos-metadata.service.
Oct 2 19:56:06.420611 kubelet[1337]: E1002 19:56:06.420550 1337 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Oct 2 19:56:06.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Oct 2 19:56:06.423140 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 2 19:56:06.423285 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 2 19:56:06.568233 systemd[1]: Stopped kubelet.service.
Oct 2 19:56:06.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:06.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:06.586529 systemd[1]: Reloading.
Oct 2 19:56:06.646789 /usr/lib/systemd/system-generators/torcx-generator[1404]: time="2023-10-02T19:56:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]"
Oct 2 19:56:06.646817 /usr/lib/systemd/system-generators/torcx-generator[1404]: time="2023-10-02T19:56:06Z" level=info msg="torcx already run"
Oct 2 19:56:06.715475 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Oct 2 19:56:06.715634 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 2 19:56:06.735957 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 2 19:56:06.781000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.781000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.781000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.781000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.781000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.781000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.781000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.781000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.781000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.782000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.782000 audit: BPF prog-id=52 op=LOAD Oct 2 19:56:06.782000 audit: BPF prog-id=37 op=UNLOAD Oct 2 19:56:06.782000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.782000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.782000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.782000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.782000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.782000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.782000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.782000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.782000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.782000 audit: BPF prog-id=53 op=LOAD Oct 2 19:56:06.782000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.782000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.782000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.782000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.782000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.782000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.782000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.782000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.783000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:56:06.783000 audit: BPF prog-id=54 op=LOAD Oct 2 19:56:06.783000 audit: BPF prog-id=38 op=UNLOAD Oct 2 19:56:06.783000 audit: BPF prog-id=39 op=UNLOAD Oct 2 19:56:06.784000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.784000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.784000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.784000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.784000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.784000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.784000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.784000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.784000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:56:06.785000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.785000 audit: BPF prog-id=55 op=LOAD Oct 2 19:56:06.785000 audit: BPF prog-id=40 op=UNLOAD Oct 2 19:56:06.785000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.785000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.785000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.785000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.785000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.785000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.785000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.785000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:56:06.786000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.786000 audit: BPF prog-id=56 op=LOAD Oct 2 19:56:06.786000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.786000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.786000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.786000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.786000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.786000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.786000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.786000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.786000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.786000 audit: BPF prog-id=57 op=LOAD Oct 2 19:56:06.786000 audit: BPF prog-id=41 op=UNLOAD Oct 2 19:56:06.786000 audit: BPF prog-id=42 op=UNLOAD Oct 2 19:56:06.787000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.787000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.787000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.787000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.787000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.787000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.787000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.787000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.787000 audit[1]: AVC avc: 
denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.787000 audit: BPF prog-id=58 op=LOAD Oct 2 19:56:06.787000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.787000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.787000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.787000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.787000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.787000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.787000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.787000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.788000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.788000 audit: BPF prog-id=59 op=LOAD Oct 2 19:56:06.788000 audit: BPF prog-id=43 op=UNLOAD Oct 2 19:56:06.788000 audit: BPF prog-id=44 op=UNLOAD Oct 2 19:56:06.788000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.788000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.788000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.788000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.788000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.788000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.788000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.788000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.789000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.789000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.789000 audit: BPF prog-id=60 op=LOAD Oct 2 19:56:06.789000 audit: BPF prog-id=45 op=UNLOAD Oct 2 19:56:06.789000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.789000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.789000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.789000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.789000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.789000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.789000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.789000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.789000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.789000 audit: BPF prog-id=61 op=LOAD Oct 2 19:56:06.789000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.790000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.790000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.790000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.790000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.790000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.790000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.790000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:56:06.790000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.790000 audit: BPF prog-id=62 op=LOAD Oct 2 19:56:06.790000 audit: BPF prog-id=46 op=UNLOAD Oct 2 19:56:06.790000 audit: BPF prog-id=47 op=UNLOAD Oct 2 19:56:06.791000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.791000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.791000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.791000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.791000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.791000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.791000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.791000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.791000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.791000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.791000 audit: BPF prog-id=63 op=LOAD Oct 2 19:56:06.792000 audit: BPF prog-id=48 op=UNLOAD Oct 2 19:56:06.792000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.792000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.792000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.792000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.792000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.792000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.792000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.792000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.793000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.793000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.793000 audit: BPF prog-id=64 op=LOAD Oct 2 19:56:06.793000 audit: BPF prog-id=49 op=UNLOAD Oct 2 19:56:06.793000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.793000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.793000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.793000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.793000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.793000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.793000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.793000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.793000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.793000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.793000 audit: BPF prog-id=65 op=LOAD Oct 2 19:56:06.793000 audit: BPF prog-id=50 op=UNLOAD Oct 2 19:56:06.795000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.795000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.795000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.795000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.795000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.795000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.795000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.795000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.795000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.795000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:06.795000 audit: BPF prog-id=66 op=LOAD Oct 2 19:56:06.795000 audit: BPF prog-id=51 op=UNLOAD Oct 2 19:56:06.815947 systemd[1]: Started kubelet.service. Oct 2 19:56:06.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.865838 kubelet[1441]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 19:56:06.865838 kubelet[1441]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Oct 2 19:56:06.865838 kubelet[1441]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:56:06.866241 kubelet[1441]: I1002 19:56:06.865864 1441 server.go:200] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:56:06.868371 kubelet[1441]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 19:56:06.868499 kubelet[1441]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:56:06.868544 kubelet[1441]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:56:07.535296 kubelet[1441]: I1002 19:56:07.535252 1441 server.go:413] "Kubelet version" kubeletVersion="v1.25.10" Oct 2 19:56:07.535296 kubelet[1441]: I1002 19:56:07.535284 1441 server.go:415] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:56:07.536050 kubelet[1441]: I1002 19:56:07.536014 1441 server.go:825] "Client rotation is on, will bootstrap in background" Oct 2 19:56:07.542031 kubelet[1441]: I1002 19:56:07.542003 1441 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:56:07.544793 kubelet[1441]: W1002 19:56:07.544776 1441 machine.go:65] Cannot read vendor id correctly, set empty. Oct 2 19:56:07.545682 kubelet[1441]: I1002 19:56:07.545655 1441 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 2 19:56:07.545883 kubelet[1441]: I1002 19:56:07.545872 1441 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:56:07.545970 kubelet[1441]: I1002 19:56:07.545958 1441 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none} Oct 2 19:56:07.546110 kubelet[1441]: I1002 19:56:07.546099 1441 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 19:56:07.546141 kubelet[1441]: I1002 19:56:07.546112 1441 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true Oct 2 19:56:07.546291 kubelet[1441]: I1002 19:56:07.546277 1441 state_mem.go:36] 
"Initialized new in-memory state store" Oct 2 19:56:07.550552 kubelet[1441]: I1002 19:56:07.550509 1441 kubelet.go:381] "Attempting to sync node with API server" Oct 2 19:56:07.550642 kubelet[1441]: I1002 19:56:07.550601 1441 kubelet.go:270] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:56:07.550642 kubelet[1441]: I1002 19:56:07.550627 1441 kubelet.go:281] "Adding apiserver pod source" Oct 2 19:56:07.550709 kubelet[1441]: I1002 19:56:07.550661 1441 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:56:07.550789 kubelet[1441]: E1002 19:56:07.550762 1441 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:07.550825 kubelet[1441]: E1002 19:56:07.550802 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:07.552557 kubelet[1441]: I1002 19:56:07.552533 1441 kuberuntime_manager.go:240] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:56:07.553692 kubelet[1441]: W1002 19:56:07.553668 1441 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 2 19:56:07.554605 kubelet[1441]: I1002 19:56:07.554581 1441 server.go:1175] "Started kubelet" Oct 2 19:56:07.555545 kubelet[1441]: I1002 19:56:07.555520 1441 kubelet.go:1274] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:56:07.555000 audit[1441]: AVC avc: denied { mac_admin } for pid=1441 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.555000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:56:07.555000 audit[1441]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=400069aed0 a1=4000c36270 a2=400069aea0 a3=25 items=0 ppid=1 pid=1441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.555000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:56:07.555000 audit[1441]: AVC avc: denied { mac_admin } for pid=1441 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.555000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:56:07.555000 audit[1441]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000c38140 a1=4000c36288 a2=400069af60 a3=25 items=0 ppid=1 pid=1441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.555000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:56:07.555983 kubelet[1441]: I1002 19:56:07.555965 1441 kubelet.go:1278] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:56:07.556275 kubelet[1441]: I1002 19:56:07.556259 1441 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:56:07.558477 kubelet[1441]: E1002 19:56:07.558449 1441 cri_stats_provider.go:452] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:56:07.558477 kubelet[1441]: E1002 19:56:07.558479 1441 kubelet.go:1317] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:56:07.561189 kubelet[1441]: E1002 19:56:07.560569 1441 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.10.178a628d09e880bf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.10", UID:"10.0.0.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.10"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 554556095, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 554556095, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:07.561298 kubelet[1441]: W1002 19:56:07.561246 1441 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.0.0.10" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:56:07.561298 kubelet[1441]: E1002 19:56:07.561288 1441 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.10" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:56:07.561372 kubelet[1441]: W1002 19:56:07.561319 1441 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:56:07.561372 kubelet[1441]: E1002 19:56:07.561328 1441 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:56:07.561747 kubelet[1441]: E1002 19:56:07.561615 1441 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.10.178a628d0a2433e2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.10", UID:"10.0.0.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on 
image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.10"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 558468578, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 558468578, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:07.561904 kubelet[1441]: I1002 19:56:07.561884 1441 server.go:155] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:56:07.563474 kubelet[1441]: I1002 19:56:07.562885 1441 server.go:438] "Adding debug handlers to kubelet server" Oct 2 19:56:07.563737 kubelet[1441]: I1002 19:56:07.563696 1441 volume_manager.go:293] "Starting Kubelet Volume Manager" Oct 2 19:56:07.564174 kubelet[1441]: I1002 19:56:07.564102 1441 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 2 19:56:07.564644 kubelet[1441]: E1002 19:56:07.564615 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:07.571191 kubelet[1441]: E1002 19:56:07.571134 1441 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.0.0.10" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:56:07.571268 kubelet[1441]: W1002 19:56:07.571214 1441 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 
19:56:07.571268 kubelet[1441]: E1002 19:56:07.571247 1441 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:56:07.588432 kubelet[1441]: I1002 19:56:07.588399 1441 cpu_manager.go:213] "Starting CPU manager" policy="none" Oct 2 19:56:07.588642 kubelet[1441]: I1002 19:56:07.588629 1441 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s" Oct 2 19:56:07.588716 kubelet[1441]: I1002 19:56:07.588706 1441 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:56:07.589903 kubelet[1441]: E1002 19:56:07.589798 1441 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.10.178a628d0be271e2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.10", UID:"10.0.0.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.10 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.10"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 587713506, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 587713506, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", 
ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:07.588000 audit[1459]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1459 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:07.588000 audit[1459]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffedbf0130 a2=0 a3=1 items=0 ppid=1441 pid=1459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.588000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:56:07.590453 kubelet[1441]: I1002 19:56:07.590431 1441 policy_none.go:49] "None policy: Start" Oct 2 19:56:07.590890 kubelet[1441]: E1002 19:56:07.590767 1441 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.10.178a628d0be2a8b2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.10", UID:"10.0.0.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.10 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.10"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 587727538, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 
7, 587727538, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:07.591291 kubelet[1441]: I1002 19:56:07.591264 1441 memory_manager.go:168] "Starting memorymanager" policy="None" Oct 2 19:56:07.591355 kubelet[1441]: I1002 19:56:07.591293 1441 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:56:07.590000 audit[1463]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1463 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:07.590000 audit[1463]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=fffffa147770 a2=0 a3=1 items=0 ppid=1441 pid=1463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.590000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:56:07.592629 kubelet[1441]: E1002 19:56:07.592534 1441 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.10.178a628d0be2b896", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.10", 
UID:"10.0.0.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.10 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.10"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 587731606, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 587731606, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:07.596285 systemd[1]: Created slice kubepods.slice. Oct 2 19:56:07.599960 systemd[1]: Created slice kubepods-besteffort.slice. Oct 2 19:56:07.592000 audit[1465]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1465 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:07.592000 audit[1465]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe2813f60 a2=0 a3=1 items=0 ppid=1441 pid=1465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.592000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:56:07.610799 systemd[1]: Created slice kubepods-burstable.slice. Oct 2 19:56:07.611799 kubelet[1441]: I1002 19:56:07.611771 1441 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:56:07.611853 kubelet[1441]: I1002 19:56:07.611833 1441 server.go:86] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:56:07.611000 audit[1441]: AVC avc: denied { mac_admin } for pid=1441 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.611000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:56:07.611000 audit[1441]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000b616b0 a1=40010b3938 a2=4000b61680 a3=25 items=0 ppid=1 pid=1441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.611000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:56:07.612689 kubelet[1441]: I1002 19:56:07.612603 1441 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:56:07.612753 kubelet[1441]: E1002 19:56:07.612631 1441 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.10\" not found" Oct 2 19:56:07.614000 audit[1470]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1470 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:07.614000 audit[1470]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe4baa970 a2=0 a3=1 items=0 ppid=1441 pid=1470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.614000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:56:07.621874 kubelet[1441]: E1002 19:56:07.621787 1441 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.10.178a628d0dd81785", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.10", UID:"10.0.0.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.10"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 620589445, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 620589445, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:07.649000 audit[1475]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1475 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:07.649000 audit[1475]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffe77f8b90 a2=0 a3=1 items=0 ppid=1441 pid=1475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.649000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:56:07.650000 audit[1476]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=1476 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:07.650000 audit[1476]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffeab2c560 a2=0 a3=1 items=0 ppid=1441 pid=1476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.650000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:56:07.655000 audit[1479]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=1479 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:07.655000 audit[1479]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffcb4c64d0 a2=0 a3=1 items=0 ppid=1441 pid=1479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.655000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:56:07.659000 audit[1482]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1482 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:07.659000 audit[1482]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffd2864220 a2=0 a3=1 items=0 ppid=1441 pid=1482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.659000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:56:07.660000 audit[1483]: NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_chain pid=1483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:07.660000 audit[1483]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd3962660 a2=0 a3=1 items=0 ppid=1441 pid=1483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.660000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:56:07.661000 audit[1484]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=1484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:07.661000 audit[1484]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd719ff20 a2=0 a3=1 items=0 ppid=1441 pid=1484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.661000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:56:07.663999 kubelet[1441]: E1002 19:56:07.663977 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:07.664628 kubelet[1441]: I1002 19:56:07.664611 1441 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.10" Oct 2 19:56:07.664000 audit[1486]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=1486 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:07.664000 audit[1486]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=fffff3a08e50 a2=0 a3=1 items=0 ppid=1441 pid=1486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.664000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:56:07.665903 kubelet[1441]: E1002 19:56:07.665881 1441 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.10" Oct 2 19:56:07.665939 kubelet[1441]: E1002 19:56:07.665877 1441 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.10.178a628d0be271e2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.10", UID:"10.0.0.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.10 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.10"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 587713506, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 664570584, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.10.178a628d0be271e2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:07.666899 kubelet[1441]: E1002 19:56:07.666840 1441 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.10.178a628d0be2a8b2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.10", UID:"10.0.0.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.10 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.10"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 587727538, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 664582135, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.10.178a628d0be2a8b2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:07.667724 kubelet[1441]: E1002 19:56:07.667660 1441 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.10.178a628d0be2b896", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.10", UID:"10.0.0.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.10 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.10"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 587731606, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 664585145, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.10.178a628d0be2b896" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:07.667000 audit[1488]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1488 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:07.667000 audit[1488]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffdbbc7550 a2=0 a3=1 items=0 ppid=1441 pid=1488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.667000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:56:07.686000 audit[1491]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1491 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:07.686000 audit[1491]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffd54297b0 a2=0 a3=1 items=0 ppid=1441 pid=1491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.686000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:56:07.688000 audit[1493]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=1493 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:07.688000 audit[1493]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=fffff2a662e0 a2=0 a3=1 items=0 ppid=1441 pid=1493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.688000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:56:07.695000 audit[1496]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=1496 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:07.695000 audit[1496]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=540 a0=3 a1=fffff9968ac0 a2=0 a3=1 items=0 ppid=1441 pid=1496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.695000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:56:07.697168 kubelet[1441]: I1002 19:56:07.697134 1441 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Oct 2 19:56:07.696000 audit[1497]: NETFILTER_CFG table=mangle:17 family=10 entries=2 op=nft_register_chain pid=1497 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:07.696000 audit[1497]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe421e7d0 a2=0 a3=1 items=0 ppid=1441 pid=1497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.696000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:56:07.697000 audit[1498]: NETFILTER_CFG table=mangle:18 family=2 entries=1 op=nft_register_chain pid=1498 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:07.697000 audit[1498]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd81f2070 a2=0 a3=1 items=0 ppid=1441 pid=1498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.697000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:56:07.698000 audit[1499]: NETFILTER_CFG table=nat:19 family=10 entries=2 op=nft_register_chain pid=1499 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:07.698000 audit[1499]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc8b30400 a2=0 a3=1 items=0 ppid=1441 pid=1499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.698000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:56:07.698000 audit[1500]: NETFILTER_CFG table=nat:20 family=2 entries=1 op=nft_register_chain pid=1500 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:07.698000 audit[1500]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff57b63e0 a2=0 a3=1 items=0 ppid=1441 pid=1500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.698000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:56:07.699000 audit[1502]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_chain pid=1502 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:07.699000 audit[1502]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd2a38700 a2=0 a3=1 items=0 ppid=1441 pid=1502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.699000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:56:07.700000 audit[1503]: NETFILTER_CFG table=nat:22 family=10 entries=1 op=nft_register_rule pid=1503 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:07.700000 audit[1503]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=fffff5cea470 a2=0 a3=1 items=0 ppid=1441 pid=1503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.700000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:56:07.701000 audit[1504]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=1504 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:07.701000 audit[1504]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffd7b72400 a2=0 a3=1 items=0 ppid=1441 pid=1504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.701000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:56:07.704000 audit[1506]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=1506 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:07.704000 audit[1506]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=fffffa200490 a2=0 a3=1 items=0 ppid=1441 pid=1506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.704000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:56:07.705000 audit[1507]: NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=1507 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:07.705000 audit[1507]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff28ce9b0 a2=0 a3=1 items=0 ppid=1441 pid=1507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.705000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:56:07.706000 audit[1508]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=1508 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:07.706000 audit[1508]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffeef51960 a2=0 a3=1 items=0 ppid=1441 pid=1508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.706000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:56:07.708000 audit[1510]: NETFILTER_CFG table=nat:27 family=10 entries=1 op=nft_register_rule pid=1510 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:07.708000 audit[1510]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=fffffbbe03c0 a2=0 a3=1 items=0 ppid=1441 pid=1510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.708000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:56:07.710000 audit[1512]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=1512 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:07.710000 audit[1512]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffdcb66eb0 a2=0 a3=1 items=0 ppid=1441 pid=1512 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.710000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:56:07.713000 audit[1514]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=1514 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:07.713000 audit[1514]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffff898f40 a2=0 a3=1 items=0 ppid=1441 pid=1514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.713000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:56:07.715000 audit[1516]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=1516 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:07.715000 audit[1516]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffe912e560 a2=0 a3=1 items=0 ppid=1441 pid=1516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.715000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:56:07.718000 audit[1518]: 
NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=1518 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:07.718000 audit[1518]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=556 a0=3 a1=ffffdaf46830 a2=0 a3=1 items=0 ppid=1441 pid=1518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.718000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:56:07.720185 kubelet[1441]: I1002 19:56:07.720148 1441 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Oct 2 19:56:07.720239 kubelet[1441]: I1002 19:56:07.720200 1441 status_manager.go:161] "Starting to sync pod status with apiserver" Oct 2 19:56:07.720239 kubelet[1441]: I1002 19:56:07.720218 1441 kubelet.go:2010] "Starting kubelet main sync loop" Oct 2 19:56:07.720292 kubelet[1441]: E1002 19:56:07.720273 1441 kubelet.go:2034] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 19:56:07.721860 kubelet[1441]: W1002 19:56:07.721829 1441 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:56:07.721860 kubelet[1441]: E1002 19:56:07.721860 1441 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope 
Oct 2 19:56:07.721000 audit[1519]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=1519 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:07.721000 audit[1519]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd3bd14e0 a2=0 a3=1 items=0 ppid=1441 pid=1519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.721000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:56:07.722000 audit[1520]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=1520 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:07.722000 audit[1520]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc727e4d0 a2=0 a3=1 items=0 ppid=1441 pid=1520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.722000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:56:07.723000 audit[1521]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=1521 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:07.723000 audit[1521]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe5bec650 a2=0 a3=1 items=0 ppid=1441 pid=1521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.723000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:56:07.764174 kubelet[1441]: E1002 19:56:07.764135 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:07.772682 kubelet[1441]: E1002 19:56:07.772636 1441 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.0.0.10" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:56:07.865466 kubelet[1441]: E1002 19:56:07.865358 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:07.869696 kubelet[1441]: I1002 19:56:07.869603 1441 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.10" Oct 2 19:56:07.871866 kubelet[1441]: E1002 19:56:07.871837 1441 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.10" Oct 2 19:56:07.872050 kubelet[1441]: E1002 19:56:07.871951 1441 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.10.178a628d0be271e2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.10", UID:"10.0.0.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.10 status is now: NodeHasSufficientMemory", 
Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.10"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 587713506, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 869556132, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.10.178a628d0be271e2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:07.872999 kubelet[1441]: E1002 19:56:07.872931 1441 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.10.178a628d0be2a8b2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.10", UID:"10.0.0.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.10 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.10"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 587727538, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 869568782, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.10.178a628d0be2a8b2" is forbidden: User "system:anonymous" cannot patch 
resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:07.956599 kubelet[1441]: E1002 19:56:07.956500 1441 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.10.178a628d0be2b896", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.10", UID:"10.0.0.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.10 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.10"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 587731606, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 869576307, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.10.178a628d0be2b896" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:07.965861 kubelet[1441]: E1002 19:56:07.965837 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:08.066191 kubelet[1441]: E1002 19:56:08.066156 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:08.166934 kubelet[1441]: E1002 19:56:08.166834 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:08.174179 kubelet[1441]: E1002 19:56:08.174126 1441 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.0.0.10" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:56:08.267700 kubelet[1441]: E1002 19:56:08.267664 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:08.273507 kubelet[1441]: I1002 19:56:08.273488 1441 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.10" Oct 2 19:56:08.274876 kubelet[1441]: E1002 19:56:08.274851 1441 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.10" Oct 2 19:56:08.275157 kubelet[1441]: E1002 19:56:08.275064 1441 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.10.178a628d0be271e2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.10", UID:"10.0.0.10", 
APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.10 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.10"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 587713506, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 8, 273450300, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.10.178a628d0be271e2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:08.357478 kubelet[1441]: E1002 19:56:08.357386 1441 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.10.178a628d0be2a8b2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.10", UID:"10.0.0.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.10 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.10"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 587727538, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 8, 273460691, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.10.178a628d0be2a8b2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:08.368834 kubelet[1441]: E1002 19:56:08.368796 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:08.469518 kubelet[1441]: E1002 19:56:08.469400 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:08.551171 kubelet[1441]: E1002 19:56:08.551113 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:08.557972 kubelet[1441]: W1002 19:56:08.557940 1441 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:56:08.557972 kubelet[1441]: E1002 19:56:08.557973 1441 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:56:08.558077 kubelet[1441]: E1002 19:56:08.557925 1441 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.10.178a628d0be2b896", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.10", UID:"10.0.0.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.10 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.10"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 587731606, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 8, 273463938, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.10.178a628d0be2b896" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:08.569632 kubelet[1441]: E1002 19:56:08.569594 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:08.670786 kubelet[1441]: E1002 19:56:08.670710 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:08.704531 kubelet[1441]: W1002 19:56:08.704482 1441 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:56:08.704531 kubelet[1441]: E1002 19:56:08.704514 1441 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:56:08.740183 kubelet[1441]: W1002 19:56:08.740081 1441 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at 
the cluster scope Oct 2 19:56:08.740183 kubelet[1441]: E1002 19:56:08.740114 1441 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:56:08.771808 kubelet[1441]: E1002 19:56:08.771774 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:08.872625 kubelet[1441]: E1002 19:56:08.872602 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:08.924673 kubelet[1441]: W1002 19:56:08.924641 1441 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.0.0.10" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:56:08.924862 kubelet[1441]: E1002 19:56:08.924849 1441 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.10" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:56:08.973719 kubelet[1441]: E1002 19:56:08.973692 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:08.975882 kubelet[1441]: E1002 19:56:08.975860 1441 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.0.0.10" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:56:09.074397 kubelet[1441]: E1002 19:56:09.074285 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:09.076089 kubelet[1441]: I1002 19:56:09.076073 1441 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.10" Oct 2 19:56:09.077471 kubelet[1441]: E1002 
19:56:09.077450 1441 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.10" Oct 2 19:56:09.077545 kubelet[1441]: E1002 19:56:09.077398 1441 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.10.178a628d0be271e2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.10", UID:"10.0.0.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.10 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.10"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 587713506, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 76035800, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.10.178a628d0be271e2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:09.078373 kubelet[1441]: E1002 19:56:09.078318 1441 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.10.178a628d0be2a8b2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.10", UID:"10.0.0.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.10 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.10"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 587727538, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 76047429, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.10.178a628d0be2a8b2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:09.156824 kubelet[1441]: E1002 19:56:09.156734 1441 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.10.178a628d0be2b896", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.10", UID:"10.0.0.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.10 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.10"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 587731606, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 76050670, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.10.178a628d0be2b896" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:09.175217 kubelet[1441]: E1002 19:56:09.175188 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:09.275285 kubelet[1441]: E1002 19:56:09.275245 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:09.376127 kubelet[1441]: E1002 19:56:09.376024 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:09.476875 kubelet[1441]: E1002 19:56:09.476832 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:09.552155 kubelet[1441]: E1002 19:56:09.552109 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:09.578186 kubelet[1441]: E1002 19:56:09.578151 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:09.679001 kubelet[1441]: E1002 19:56:09.678901 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:09.779800 kubelet[1441]: E1002 19:56:09.779742 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:09.880576 kubelet[1441]: E1002 19:56:09.880523 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:09.981534 kubelet[1441]: E1002 19:56:09.981387 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:10.082026 kubelet[1441]: E1002 19:56:10.081974 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:10.182224 kubelet[1441]: E1002 19:56:10.182162 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:10.282417 kubelet[1441]: E1002 19:56:10.282292 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:10.382824 kubelet[1441]: E1002 19:56:10.382773 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:10.483613 kubelet[1441]: 
E1002 19:56:10.483563 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:10.553044 kubelet[1441]: E1002 19:56:10.552938 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:10.577270 kubelet[1441]: E1002 19:56:10.577223 1441 controller.go:144] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.0.0.10" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:56:10.584670 kubelet[1441]: E1002 19:56:10.584629 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:10.678723 kubelet[1441]: I1002 19:56:10.678692 1441 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.10" Oct 2 19:56:10.679995 kubelet[1441]: E1002 19:56:10.679965 1441 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.10" Oct 2 19:56:10.680183 kubelet[1441]: E1002 19:56:10.680099 1441 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.10.178a628d0be271e2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.10", UID:"10.0.0.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.10 status is now: 
NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.10"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 587713506, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 10, 678625604, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.10.178a628d0be271e2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:10.681379 kubelet[1441]: E1002 19:56:10.681309 1441 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.10.178a628d0be2a8b2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.10", UID:"10.0.0.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.10 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.10"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 587727538, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 10, 678649511, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.10.178a628d0be2a8b2" is forbidden: User 
"system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:10.682844 kubelet[1441]: E1002 19:56:10.682790 1441 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.10.178a628d0be2b896", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.10", UID:"10.0.0.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.10 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.10"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 587731606, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 10, 678653031, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.10.178a628d0be2b896" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:10.684919 kubelet[1441]: E1002 19:56:10.684899 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:10.779780 kubelet[1441]: W1002 19:56:10.779751 1441 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:56:10.779941 kubelet[1441]: E1002 19:56:10.779930 1441 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:56:10.786001 kubelet[1441]: E1002 19:56:10.785958 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:10.886756 kubelet[1441]: E1002 19:56:10.886647 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:10.987222 kubelet[1441]: E1002 19:56:10.987182 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:11.088150 kubelet[1441]: E1002 19:56:11.088100 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:11.188690 kubelet[1441]: E1002 19:56:11.188566 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:11.289212 kubelet[1441]: E1002 19:56:11.289154 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:11.352221 kubelet[1441]: W1002 19:56:11.352181 1441 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:56:11.352221 kubelet[1441]: E1002 19:56:11.352212 1441 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:56:11.389757 kubelet[1441]: E1002 19:56:11.389713 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:11.407904 kubelet[1441]: W1002 19:56:11.407868 1441 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:56:11.407904 kubelet[1441]: E1002 19:56:11.407903 1441 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:56:11.489921 kubelet[1441]: E1002 19:56:11.489792 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:11.553668 kubelet[1441]: E1002 19:56:11.553608 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:11.590943 kubelet[1441]: E1002 19:56:11.590876 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:11.693248 kubelet[1441]: E1002 19:56:11.693174 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:11.708841 kubelet[1441]: W1002 19:56:11.708803 1441 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.0.0.10" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:56:11.708841 kubelet[1441]: E1002 19:56:11.708836 1441 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: 
Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.10" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:56:11.793953 kubelet[1441]: E1002 19:56:11.793832 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:11.894243 kubelet[1441]: E1002 19:56:11.894173 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:11.994418 kubelet[1441]: E1002 19:56:11.994353 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:12.094814 kubelet[1441]: E1002 19:56:12.094687 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:12.194817 kubelet[1441]: E1002 19:56:12.194757 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:12.294900 kubelet[1441]: E1002 19:56:12.294836 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:12.395258 kubelet[1441]: E1002 19:56:12.395118 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:12.496204 kubelet[1441]: E1002 19:56:12.496125 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:12.553965 kubelet[1441]: E1002 19:56:12.553904 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:12.597233 kubelet[1441]: E1002 19:56:12.597176 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:12.612947 kubelet[1441]: E1002 19:56:12.612921 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:12.698327 kubelet[1441]: E1002 19:56:12.698198 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:12.798712 
kubelet[1441]: E1002 19:56:12.798654 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:12.899602 kubelet[1441]: E1002 19:56:12.899549 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:13.000300 kubelet[1441]: E1002 19:56:13.000178 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:13.100945 kubelet[1441]: E1002 19:56:13.100876 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:13.202035 kubelet[1441]: E1002 19:56:13.201962 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:13.302353 kubelet[1441]: E1002 19:56:13.302192 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:13.403322 kubelet[1441]: E1002 19:56:13.403256 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:13.503690 kubelet[1441]: E1002 19:56:13.503615 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:13.554434 kubelet[1441]: E1002 19:56:13.554316 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:13.604377 kubelet[1441]: E1002 19:56:13.604324 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:13.705075 kubelet[1441]: E1002 19:56:13.705009 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:13.779117 kubelet[1441]: E1002 19:56:13.779048 1441 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.0.0.10" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:56:13.805843 kubelet[1441]: E1002 19:56:13.805711 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 
19:56:13.881252 kubelet[1441]: I1002 19:56:13.881223 1441 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.10" Oct 2 19:56:13.882691 kubelet[1441]: E1002 19:56:13.882610 1441 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.10.178a628d0be271e2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.10", UID:"10.0.0.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.10 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.10"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 587713506, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 881187658, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.10.178a628d0be271e2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:13.882920 kubelet[1441]: E1002 19:56:13.882895 1441 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.10" Oct 2 19:56:13.883891 kubelet[1441]: E1002 19:56:13.883828 1441 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.10.178a628d0be2a8b2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.10", UID:"10.0.0.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.10 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.10"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 587727538, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 881195518, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.10.178a628d0be2a8b2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:13.884788 kubelet[1441]: E1002 19:56:13.884719 1441 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.10.178a628d0be2b896", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.10", UID:"10.0.0.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.10 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.10"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 7, 587731606, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 13, 881198097, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.10.178a628d0be2b896" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:13.906319 kubelet[1441]: E1002 19:56:13.906263 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:14.006414 kubelet[1441]: E1002 19:56:14.006364 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:14.107167 kubelet[1441]: E1002 19:56:14.106925 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:14.207132 kubelet[1441]: E1002 19:56:14.207070 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:14.307255 kubelet[1441]: E1002 19:56:14.307196 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:14.408417 kubelet[1441]: E1002 19:56:14.408275 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:14.508495 kubelet[1441]: E1002 19:56:14.508427 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:14.555608 kubelet[1441]: E1002 19:56:14.555132 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:14.609681 kubelet[1441]: E1002 19:56:14.609595 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:14.710117 kubelet[1441]: E1002 19:56:14.709976 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:14.810388 kubelet[1441]: E1002 19:56:14.810316 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:14.910463 kubelet[1441]: E1002 19:56:14.910394 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:15.011523 kubelet[1441]: E1002 19:56:15.011397 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:15.071419 kubelet[1441]: W1002 19:56:15.071360 1441 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is 
forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:56:15.071419 kubelet[1441]: E1002 19:56:15.071396 1441 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:56:15.112070 kubelet[1441]: E1002 19:56:15.112010 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:15.215334 kubelet[1441]: E1002 19:56:15.215245 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:15.315498 kubelet[1441]: E1002 19:56:15.315370 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:15.416598 kubelet[1441]: E1002 19:56:15.416479 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:15.517404 kubelet[1441]: E1002 19:56:15.517309 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:15.555928 kubelet[1441]: E1002 19:56:15.555816 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:15.618473 kubelet[1441]: E1002 19:56:15.618346 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:15.719355 kubelet[1441]: E1002 19:56:15.719286 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:15.819458 kubelet[1441]: E1002 19:56:15.819365 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:15.920487 kubelet[1441]: E1002 19:56:15.920353 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:16.021291 kubelet[1441]: E1002 19:56:16.021225 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:16.121645 kubelet[1441]: 
E1002 19:56:16.121578 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:16.222712 kubelet[1441]: E1002 19:56:16.222577 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:16.323996 kubelet[1441]: E1002 19:56:16.323935 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:16.424320 kubelet[1441]: E1002 19:56:16.424255 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:16.525447 kubelet[1441]: E1002 19:56:16.525316 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:16.556969 kubelet[1441]: E1002 19:56:16.556885 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:16.626116 kubelet[1441]: E1002 19:56:16.626066 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:16.727221 kubelet[1441]: E1002 19:56:16.727156 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:16.828330 kubelet[1441]: E1002 19:56:16.828187 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:16.929429 kubelet[1441]: E1002 19:56:16.929216 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found" Oct 2 19:56:17.017549 kubelet[1441]: W1002 19:56:17.017459 1441 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.0.0.10" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:56:17.017549 kubelet[1441]: E1002 19:56:17.017499 1441 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.10" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:56:17.030029 kubelet[1441]: 
E1002 19:56:17.029981 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:17.130503 kubelet[1441]: E1002 19:56:17.130376 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:17.231030 kubelet[1441]: E1002 19:56:17.230964 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:17.291182 kubelet[1441]: W1002 19:56:17.291130 1441 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Oct 2 19:56:17.291182 kubelet[1441]: E1002 19:56:17.291172 1441 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Oct 2 19:56:17.331585 kubelet[1441]: E1002 19:56:17.331531 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:17.432750 kubelet[1441]: E1002 19:56:17.432623 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:17.533737 kubelet[1441]: E1002 19:56:17.533678 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:17.539924 kubelet[1441]: I1002 19:56:17.539886 1441 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials"
Oct 2 19:56:17.557420 kubelet[1441]: E1002 19:56:17.557363 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:56:17.613254 kubelet[1441]: E1002 19:56:17.613010 1441 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.10\" not found"
Oct 2 19:56:17.614527 kubelet[1441]: E1002 19:56:17.613652 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:56:17.633904 kubelet[1441]: E1002 19:56:17.633828 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:17.734709 kubelet[1441]: E1002 19:56:17.734591 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:17.835391 kubelet[1441]: E1002 19:56:17.835339 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:17.924708 kubelet[1441]: E1002 19:56:17.924658 1441 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.10" not found
Oct 2 19:56:17.936065 kubelet[1441]: E1002 19:56:17.936035 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:18.036718 kubelet[1441]: E1002 19:56:18.036626 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:18.137704 kubelet[1441]: E1002 19:56:18.137669 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:18.237952 kubelet[1441]: E1002 19:56:18.237907 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:18.338932 kubelet[1441]: E1002 19:56:18.338797 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:18.438991 kubelet[1441]: E1002 19:56:18.438949 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:18.540880 kubelet[1441]: E1002 19:56:18.540851 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:18.558286 kubelet[1441]: E1002 19:56:18.558246 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:56:18.641696 kubelet[1441]: E1002 19:56:18.641582 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:18.742625 kubelet[1441]: E1002 19:56:18.742559 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:18.842895 kubelet[1441]: E1002 19:56:18.842852 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:18.943873 kubelet[1441]: E1002 19:56:18.943761 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:19.044387 kubelet[1441]: E1002 19:56:19.044324 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:19.145441 kubelet[1441]: E1002 19:56:19.145404 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:19.165555 kubelet[1441]: E1002 19:56:19.165468 1441 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.10" not found
Oct 2 19:56:19.246709 kubelet[1441]: E1002 19:56:19.246517 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:19.347022 kubelet[1441]: E1002 19:56:19.346980 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:19.448028 kubelet[1441]: E1002 19:56:19.447992 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:19.549133 kubelet[1441]: E1002 19:56:19.549032 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:19.559299 kubelet[1441]: E1002 19:56:19.559270 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:56:19.650009 kubelet[1441]: E1002 19:56:19.649961 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:19.750713 kubelet[1441]: E1002 19:56:19.750668 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:19.851046 kubelet[1441]: E1002 19:56:19.850935 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:19.951794 kubelet[1441]: E1002 19:56:19.951724 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:20.052242 kubelet[1441]: E1002 19:56:20.052208 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:20.153343 kubelet[1441]: E1002 19:56:20.153231 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:20.183422 kubelet[1441]: E1002 19:56:20.183391 1441 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.10\" not found" node="10.0.0.10"
Oct 2 19:56:20.253972 kubelet[1441]: E1002 19:56:20.253926 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:20.284014 kubelet[1441]: I1002 19:56:20.283984 1441 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.10"
Oct 2 19:56:20.354660 kubelet[1441]: E1002 19:56:20.354595 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:20.455820 kubelet[1441]: E1002 19:56:20.455685 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:20.556733 kubelet[1441]: E1002 19:56:20.556696 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:20.559951 kubelet[1441]: E1002 19:56:20.559935 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:56:20.566684 kubelet[1441]: I1002 19:56:20.566617 1441 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.10"
Oct 2 19:56:20.657469 kubelet[1441]: E1002 19:56:20.657398 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:20.757929 kubelet[1441]: E1002 19:56:20.757839 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:20.833517 sudo[1269]: pam_unix(sudo:session): session closed for user root
Oct 2 19:56:20.832000 audit[1269]: USER_END pid=1269 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:20.836115 kernel: kauditd_printk_skb: 474 callbacks suppressed
Oct 2 19:56:20.836200 kernel: audit: type=1106 audit(1696276580.832:574): pid=1269 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:20.836230 kernel: audit: type=1104 audit(1696276580.832:575): pid=1269 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:20.832000 audit[1269]: CRED_DISP pid=1269 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:20.836609 sshd[1266]: pam_unix(sshd:session): session closed for user core
Oct 2 19:56:20.837000 audit[1266]: USER_END pid=1266 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 2 19:56:20.839909 systemd[1]: sshd@6-10.0.0.10:22-10.0.0.1:56224.service: Deactivated successfully.
Oct 2 19:56:20.840628 systemd[1]: session-7.scope: Deactivated successfully.
Oct 2 19:56:20.837000 audit[1266]: CRED_DISP pid=1266 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 2 19:56:20.842652 systemd-logind[1129]: Session 7 logged out. Waiting for processes to exit.
Oct 2 19:56:20.843877 kernel: audit: type=1106 audit(1696276580.837:576): pid=1266 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 2 19:56:20.843906 kernel: audit: type=1104 audit(1696276580.837:577): pid=1266 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 2 19:56:20.843925 kernel: audit: type=1131 audit(1696276580.837:578): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.10:22-10.0.0.1:56224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:20.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.10:22-10.0.0.1:56224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:56:20.843454 systemd-logind[1129]: Removed session 7.
Oct 2 19:56:20.858702 kubelet[1441]: E1002 19:56:20.858649 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:20.959399 kubelet[1441]: E1002 19:56:20.959340 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:21.060233 kubelet[1441]: E1002 19:56:21.060105 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:21.161028 kubelet[1441]: E1002 19:56:21.160966 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:21.261522 kubelet[1441]: E1002 19:56:21.261481 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:21.362616 kubelet[1441]: E1002 19:56:21.362509 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:21.463689 kubelet[1441]: E1002 19:56:21.463596 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:21.560823 kubelet[1441]: E1002 19:56:21.560781 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:56:21.563999 kubelet[1441]: E1002 19:56:21.563971 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:21.665045 kubelet[1441]: E1002 19:56:21.664934 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:21.765346 kubelet[1441]: E1002 19:56:21.765293 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:21.866190 kubelet[1441]: E1002 19:56:21.866129 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:21.967197 kubelet[1441]: E1002 19:56:21.967071 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:22.068127 kubelet[1441]: E1002 19:56:22.068087 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:22.169015 kubelet[1441]: E1002 19:56:22.168975 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:22.269997 kubelet[1441]: E1002 19:56:22.269882 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:22.370723 kubelet[1441]: E1002 19:56:22.370678 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:22.471701 kubelet[1441]: E1002 19:56:22.471658 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:22.561605 kubelet[1441]: E1002 19:56:22.561479 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:56:22.571920 kubelet[1441]: E1002 19:56:22.571891 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:22.614707 kubelet[1441]: E1002 19:56:22.614671 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:56:22.672643 kubelet[1441]: E1002 19:56:22.672600 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:22.773713 kubelet[1441]: E1002 19:56:22.773673 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:22.874575 kubelet[1441]: E1002 19:56:22.874457 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:22.975548 kubelet[1441]: E1002 19:56:22.975498 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:23.076180 kubelet[1441]: E1002 19:56:23.076103 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:23.176985 kubelet[1441]: E1002 19:56:23.176870 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:23.277726 kubelet[1441]: E1002 19:56:23.277667 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:23.378554 kubelet[1441]: E1002 19:56:23.378477 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:23.479265 kubelet[1441]: E1002 19:56:23.479115 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:23.561644 kubelet[1441]: E1002 19:56:23.561580 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:56:23.580049 kubelet[1441]: E1002 19:56:23.580011 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:23.680986 kubelet[1441]: E1002 19:56:23.680944 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:23.781231 kubelet[1441]: E1002 19:56:23.781079 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:23.881809 kubelet[1441]: E1002 19:56:23.881747 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:23.982760 kubelet[1441]: E1002 19:56:23.982718 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:24.083891 kubelet[1441]: E1002 19:56:24.083782 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:24.184606 kubelet[1441]: E1002 19:56:24.184562 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:24.285338 kubelet[1441]: E1002 19:56:24.285294 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:24.386119 kubelet[1441]: E1002 19:56:24.386007 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:24.486875 kubelet[1441]: E1002 19:56:24.486834 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:24.562517 kubelet[1441]: E1002 19:56:24.562459 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:56:24.587134 kubelet[1441]: E1002 19:56:24.587068 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:24.688189 kubelet[1441]: E1002 19:56:24.688034 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:24.789083 kubelet[1441]: E1002 19:56:24.789016 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:24.889736 kubelet[1441]: E1002 19:56:24.889690 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:24.990638 kubelet[1441]: E1002 19:56:24.990538 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:25.091526 kubelet[1441]: E1002 19:56:25.091482 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:25.192358 kubelet[1441]: E1002 19:56:25.192315 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:25.293296 kubelet[1441]: E1002 19:56:25.293187 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:25.394069 kubelet[1441]: E1002 19:56:25.394025 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:25.494814 kubelet[1441]: E1002 19:56:25.494774 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:25.563397 kubelet[1441]: E1002 19:56:25.563271 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:56:25.595850 kubelet[1441]: E1002 19:56:25.595808 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:25.696772 kubelet[1441]: E1002 19:56:25.696731 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:25.797663 kubelet[1441]: E1002 19:56:25.797593 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:25.898529 kubelet[1441]: E1002 19:56:25.898397 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:25.999104 kubelet[1441]: E1002 19:56:25.999057 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:26.100043 kubelet[1441]: E1002 19:56:26.099966 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:26.200787 kubelet[1441]: E1002 19:56:26.200631 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:26.301424 kubelet[1441]: E1002 19:56:26.301365 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:26.402156 kubelet[1441]: E1002 19:56:26.402080 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:26.502904 kubelet[1441]: E1002 19:56:26.502772 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:26.563459 kubelet[1441]: E1002 19:56:26.563403 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:56:26.603072 kubelet[1441]: E1002 19:56:26.603011 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:26.703817 kubelet[1441]: E1002 19:56:26.703760 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:26.804899 kubelet[1441]: E1002 19:56:26.804731 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:26.905793 kubelet[1441]: E1002 19:56:26.905729 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:27.006419 kubelet[1441]: E1002 19:56:27.006352 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:27.107391 kubelet[1441]: E1002 19:56:27.107231 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:27.207953 kubelet[1441]: E1002 19:56:27.207888 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:27.308576 kubelet[1441]: E1002 19:56:27.308515 1441 kubelet.go:2448] "Error getting node" err="node \"10.0.0.10\" not found"
Oct 2 19:56:27.409895 kubelet[1441]: I1002 19:56:27.409555 1441 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Oct 2 19:56:27.410005 env[1142]: time="2023-10-02T19:56:27.409963801Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 2 19:56:27.410263 kubelet[1441]: I1002 19:56:27.410244 1441 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Oct 2 19:56:27.410637 kubelet[1441]: E1002 19:56:27.410620 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:56:27.551571 kubelet[1441]: E1002 19:56:27.551519 1441 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:56:27.563661 kubelet[1441]: I1002 19:56:27.563634 1441 apiserver.go:52] "Watching apiserver"
Oct 2 19:56:27.563842 kubelet[1441]: E1002 19:56:27.563669 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:56:27.566625 kubelet[1441]: I1002 19:56:27.566604 1441 topology_manager.go:205] "Topology Admit Handler"
Oct 2 19:56:27.566816 kubelet[1441]: I1002 19:56:27.566801 1441 topology_manager.go:205] "Topology Admit Handler"
Oct 2 19:56:27.571241 systemd[1]: Created slice kubepods-besteffort-pod9dbf2e76_0a84_4002_b2bf_724100123f89.slice.
Oct 2 19:56:27.584037 systemd[1]: Created slice kubepods-burstable-pod0a5951a4_b50b_4eb1_a072_46c96f4c3f9f.slice.
Oct 2 19:56:27.616031 kubelet[1441]: E1002 19:56:27.615990 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:56:27.695164 kubelet[1441]: I1002 19:56:27.695046 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9dbf2e76-0a84-4002-b2bf-724100123f89-xtables-lock\") pod \"kube-proxy-7xx4g\" (UID: \"9dbf2e76-0a84-4002-b2bf-724100123f89\") " pod="kube-system/kube-proxy-7xx4g"
Oct 2 19:56:27.695347 kubelet[1441]: I1002 19:56:27.695335 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-hubble-tls\") pod \"cilium-xph9q\" (UID: \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\") " pod="kube-system/cilium-xph9q"
Oct 2 19:56:27.695434 kubelet[1441]: I1002 19:56:27.695423 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpkkl\" (UniqueName: \"kubernetes.io/projected/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-kube-api-access-hpkkl\") pod \"cilium-xph9q\" (UID: \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\") " pod="kube-system/cilium-xph9q"
Oct 2 19:56:27.695508 kubelet[1441]: I1002 19:56:27.695499 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9dbf2e76-0a84-4002-b2bf-724100123f89-kube-proxy\") pod \"kube-proxy-7xx4g\" (UID: \"9dbf2e76-0a84-4002-b2bf-724100123f89\") " pod="kube-system/kube-proxy-7xx4g"
Oct 2 19:56:27.695581 kubelet[1441]: I1002 19:56:27.695573 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swcbg\" (UniqueName: \"kubernetes.io/projected/9dbf2e76-0a84-4002-b2bf-724100123f89-kube-api-access-swcbg\") pod \"kube-proxy-7xx4g\" (UID: \"9dbf2e76-0a84-4002-b2bf-724100123f89\") " pod="kube-system/kube-proxy-7xx4g"
Oct 2 19:56:27.695700 kubelet[1441]: I1002 19:56:27.695665 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-hostproc\") pod \"cilium-xph9q\" (UID: \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\") " pod="kube-system/cilium-xph9q"
Oct 2 19:56:27.695749 kubelet[1441]: I1002 19:56:27.695721 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-etc-cni-netd\") pod \"cilium-xph9q\" (UID: \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\") " pod="kube-system/cilium-xph9q"
Oct 2 19:56:27.695749 kubelet[1441]: I1002 19:56:27.695743 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-host-proc-sys-kernel\") pod \"cilium-xph9q\" (UID: \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\") " pod="kube-system/cilium-xph9q"
Oct 2 19:56:27.695801 kubelet[1441]: I1002 19:56:27.695783 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-cilium-run\") pod \"cilium-xph9q\" (UID: \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\") " pod="kube-system/cilium-xph9q"
Oct 2 19:56:27.695830 kubelet[1441]: I1002 19:56:27.695804 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-cilium-cgroup\") pod \"cilium-xph9q\" (UID: \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\") " pod="kube-system/cilium-xph9q"
Oct 2 19:56:27.695853 kubelet[1441]: I1002 19:56:27.695832 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-lib-modules\") pod \"cilium-xph9q\" (UID: \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\") " pod="kube-system/cilium-xph9q"
Oct 2 19:56:27.695878 kubelet[1441]: I1002 19:56:27.695871 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-xtables-lock\") pod \"cilium-xph9q\" (UID: \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\") " pod="kube-system/cilium-xph9q"
Oct 2 19:56:27.695905 kubelet[1441]: I1002 19:56:27.695894 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-clustermesh-secrets\") pod \"cilium-xph9q\" (UID: \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\") " pod="kube-system/cilium-xph9q"
Oct 2 19:56:27.695960 kubelet[1441]: I1002 19:56:27.695948 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-cilium-config-path\") pod \"cilium-xph9q\" (UID: \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\") " pod="kube-system/cilium-xph9q"
Oct 2 19:56:27.695995 kubelet[1441]: I1002 19:56:27.695974 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9dbf2e76-0a84-4002-b2bf-724100123f89-lib-modules\") pod \"kube-proxy-7xx4g\" (UID: \"9dbf2e76-0a84-4002-b2bf-724100123f89\") " pod="kube-system/kube-proxy-7xx4g"
Oct 2 19:56:27.696087 kubelet[1441]: I1002 19:56:27.696067 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-bpf-maps\") pod \"cilium-xph9q\" (UID: \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\") " pod="kube-system/cilium-xph9q"
Oct 2 19:56:27.696125 kubelet[1441]: I1002 19:56:27.696098 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-cni-path\") pod \"cilium-xph9q\" (UID: \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\") " pod="kube-system/cilium-xph9q"
Oct 2 19:56:27.696180 kubelet[1441]: I1002 19:56:27.696169 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-host-proc-sys-net\") pod \"cilium-xph9q\" (UID: \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\") " pod="kube-system/cilium-xph9q"
Oct 2 19:56:27.696211 kubelet[1441]: I1002 19:56:27.696188 1441 reconciler.go:169] "Reconciler: start to sync state"
Oct 2 19:56:27.883611 kubelet[1441]: E1002 19:56:27.883580 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:56:27.884415 env[1142]: time="2023-10-02T19:56:27.884367574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7xx4g,Uid:9dbf2e76-0a84-4002-b2bf-724100123f89,Namespace:kube-system,Attempt:0,}"
Oct 2 19:56:28.195741 kubelet[1441]: E1002 19:56:28.195217 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:56:28.196059 env[1142]: time="2023-10-02T19:56:28.195686085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xph9q,Uid:0a5951a4-b50b-4eb1-a072-46c96f4c3f9f,Namespace:kube-system,Attempt:0,}"
Oct 2 19:56:28.438398 env[1142]: time="2023-10-02T19:56:28.438350717Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 2 19:56:28.439285 env[1142]: time="2023-10-02T19:56:28.439257322Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 2 19:56:28.441632 env[1142]: time="2023-10-02T19:56:28.441602957Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 2 19:56:28.442973 env[1142]: time="2023-10-02T19:56:28.442944927Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 2 19:56:28.446178 env[1142]: time="2023-10-02T19:56:28.445777058Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 2 19:56:28.448102 env[1142]: time="2023-10-02T19:56:28.448074163Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 2 19:56:28.449808 env[1142]: time="2023-10-02T19:56:28.449774618Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 2 19:56:28.451994 env[1142]: time="2023-10-02T19:56:28.451929138Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 2 19:56:28.476576 env[1142]: time="2023-10-02T19:56:28.476390148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 2 19:56:28.476576 env[1142]: time="2023-10-02T19:56:28.476424783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 2 19:56:28.476576 env[1142]: time="2023-10-02T19:56:28.476434953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 2 19:56:28.476810 env[1142]: time="2023-10-02T19:56:28.476628671Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/271835d25ec304052cf865c46df77f3c5951e0257652c5de5ff08da89962074d pid=1546 runtime=io.containerd.runc.v2
Oct 2 19:56:28.476810 env[1142]: time="2023-10-02T19:56:28.476320797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 2 19:56:28.476810 env[1142]: time="2023-10-02T19:56:28.476363440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 2 19:56:28.476810 env[1142]: time="2023-10-02T19:56:28.476376213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 2 19:56:28.477456 env[1142]: time="2023-10-02T19:56:28.476994645Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/37b0b54a547fc6a757e6168b8729c2d3df0153b87ca46dbd7f39871214840243 pid=1545 runtime=io.containerd.runc.v2
Oct 2 19:56:28.497845 systemd[1]: Started cri-containerd-271835d25ec304052cf865c46df77f3c5951e0257652c5de5ff08da89962074d.scope.
Oct 2 19:56:28.505349 systemd[1]: Started cri-containerd-37b0b54a547fc6a757e6168b8729c2d3df0153b87ca46dbd7f39871214840243.scope.
Oct 2 19:56:28.527000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:56:28.531470 kernel: audit: type=1400 audit(1696276588.527:579): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:56:28.531533 kernel: audit: type=1400 audit(1696276588.527:580): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:56:28.531558 kernel: audit: type=1400 audit(1696276588.527:581): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:56:28.527000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:56:28.527000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:56:28.527000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:56:28.535093 kernel: audit: type=1400 audit(1696276588.527:582): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:56:28.535183 kernel: audit: type=1400 audit(1696276588.527:583): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:56:28.527000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:56:28.527000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:56:28.539797 kernel: audit: audit_backlog=65 > audit_backlog_limit=64
Oct 2 19:56:28.539856 kernel: audit: type=1400 audit(1696276588.527:584): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:56:28.539878 kernel: audit: audit_lost=1 audit_rate_limit=0 audit_backlog_limit=64
Oct 2 19:56:28.539897 kernel: audit: type=1400 audit(1696276588.527:585): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:56:28.527000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:56:28.527000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:56:28.542147 kernel: audit: backlog limit exceeded
Oct 2 19:56:28.527000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:56:28.527000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:56:28.527000 audit: BPF prog-id=67 op=LOAD
Oct 2 19:56:28.528000 audit[1565]: AVC avc: denied { bpf } for pid=1565 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:56:28.528000 audit[1565]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=40001c5b38 a2=10 a3=0 items=0 ppid=1546 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 2 19:56:28.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237313833356432356563333034303532636638363563343664663737
Oct 2 19:56:28.528000 audit[1565]: AVC avc: denied { perfmon } for pid=1565 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:56:28.528000 audit[1565]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001c55a0 a2=3c a3=0 items=0 ppid=1546 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295
comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:28.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237313833356432356563333034303532636638363563343664663737 Oct 2 19:56:28.528000 audit[1565]: AVC avc: denied { bpf } for pid=1565 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.528000 audit[1565]: AVC avc: denied { bpf } for pid=1565 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.528000 audit[1565]: AVC avc: denied { bpf } for pid=1565 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.528000 audit[1565]: AVC avc: denied { perfmon } for pid=1565 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.528000 audit[1565]: AVC avc: denied { perfmon } for pid=1565 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.528000 audit[1565]: AVC avc: denied { perfmon } for pid=1565 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.528000 audit[1565]: AVC avc: denied { perfmon } for pid=1565 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.528000 audit[1565]: AVC avc: denied { perfmon } for pid=1565 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.528000 audit[1565]: AVC avc: denied { bpf } for pid=1565 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.528000 audit[1565]: AVC avc: denied { bpf } for pid=1565 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.528000 audit: BPF prog-id=68 op=LOAD Oct 2 19:56:28.528000 audit[1565]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001c58e0 a2=78 a3=0 items=0 ppid=1546 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:28.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237313833356432356563333034303532636638363563343664663737 Oct 2 19:56:28.529000 audit[1565]: AVC avc: denied { bpf } for pid=1565 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.529000 audit[1565]: AVC avc: denied { bpf } for pid=1565 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.529000 audit[1565]: AVC avc: denied { perfmon } for pid=1565 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.529000 audit[1565]: AVC avc: denied { perfmon } for pid=1565 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.529000 audit[1565]: AVC avc: denied { perfmon } for pid=1565 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.529000 audit[1565]: AVC avc: denied { perfmon } for pid=1565 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.529000 audit[1565]: AVC avc: denied { perfmon } for pid=1565 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.529000 audit[1565]: AVC avc: denied { bpf } for pid=1565 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.529000 audit[1565]: AVC avc: denied { bpf } for pid=1565 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.529000 audit: BPF prog-id=69 op=LOAD Oct 2 19:56:28.529000 audit[1565]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001c5670 a2=78 a3=0 items=0 ppid=1546 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:28.529000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237313833356432356563333034303532636638363563343664663737 Oct 2 19:56:28.531000 audit: BPF prog-id=69 op=UNLOAD Oct 2 19:56:28.531000 audit: BPF prog-id=68 op=UNLOAD Oct 2 19:56:28.531000 audit[1565]: AVC avc: denied { bpf } for 
pid=1565 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.531000 audit[1565]: AVC avc: denied { bpf } for pid=1565 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.531000 audit[1565]: AVC avc: denied { bpf } for pid=1565 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.531000 audit[1565]: AVC avc: denied { perfmon } for pid=1565 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.531000 audit[1565]: AVC avc: denied { perfmon } for pid=1565 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.531000 audit[1565]: AVC avc: denied { perfmon } for pid=1565 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.531000 audit[1565]: AVC avc: denied { perfmon } for pid=1565 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.531000 audit[1565]: AVC avc: denied { perfmon } for pid=1565 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.531000 audit[1565]: AVC avc: denied { bpf } for pid=1565 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.531000 audit[1565]: AVC avc: denied { bpf } for pid=1565 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.531000 audit: BPF prog-id=70 op=LOAD Oct 2 19:56:28.531000 audit[1565]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001c5b40 a2=78 a3=0 items=0 ppid=1546 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:28.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237313833356432356563333034303532636638363563343664663737 Oct 2 19:56:28.537000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.537000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.537000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.537000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.537000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.537000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:56:28.537000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.537000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.542000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.542000 audit: BPF prog-id=71 op=LOAD Oct 2 19:56:28.543000 audit[1566]: AVC avc: denied { bpf } for pid=1566 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.543000 audit[1566]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000195b38 a2=10 a3=0 items=0 ppid=1545 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:28.543000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337623062353461353437666336613735376536313638623837323963 Oct 2 19:56:28.543000 audit[1566]: AVC avc: denied { perfmon } for pid=1566 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.543000 audit[1566]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=1545 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:28.543000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337623062353461353437666336613735376536313638623837323963 Oct 2 19:56:28.543000 audit[1566]: AVC avc: denied { bpf } for pid=1566 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.543000 audit[1566]: AVC avc: denied { bpf } for pid=1566 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.543000 audit[1566]: AVC avc: denied { bpf } for pid=1566 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.543000 audit[1566]: AVC avc: denied { perfmon } for pid=1566 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.543000 audit[1566]: AVC avc: denied { perfmon } for pid=1566 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.543000 audit[1566]: AVC avc: denied { perfmon } for pid=1566 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.543000 audit[1566]: AVC avc: denied { perfmon } for pid=1566 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.543000 audit[1566]: AVC avc: denied { perfmon } for pid=1566 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.543000 audit[1566]: AVC avc: denied { bpf } for pid=1566 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.543000 audit[1566]: AVC avc: denied { bpf } for pid=1566 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.543000 audit: BPF prog-id=72 op=LOAD Oct 2 19:56:28.543000 audit[1566]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=1545 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:28.543000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337623062353461353437666336613735376536313638623837323963 Oct 2 19:56:28.543000 audit[1566]: AVC avc: denied { bpf } for pid=1566 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.543000 audit[1566]: AVC avc: denied { bpf } for pid=1566 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.543000 audit[1566]: AVC avc: denied { perfmon } for pid=1566 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.543000 audit[1566]: AVC avc: denied { perfmon } for pid=1566 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.543000 audit[1566]: AVC avc: denied { perfmon } for pid=1566 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.543000 audit[1566]: AVC avc: denied { perfmon } for pid=1566 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.543000 audit[1566]: AVC avc: denied { perfmon } for pid=1566 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.543000 audit[1566]: AVC avc: denied { bpf } for pid=1566 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.543000 audit[1566]: AVC avc: denied { bpf } for pid=1566 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.543000 audit: BPF prog-id=73 op=LOAD Oct 2 19:56:28.543000 audit[1566]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=1545 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:28.543000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337623062353461353437666336613735376536313638623837323963 Oct 2 19:56:28.543000 audit: BPF prog-id=73 op=UNLOAD Oct 2 19:56:28.543000 audit: BPF prog-id=72 op=UNLOAD Oct 2 19:56:28.543000 audit[1566]: AVC avc: denied { bpf } for 
pid=1566 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.543000 audit[1566]: AVC avc: denied { bpf } for pid=1566 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.543000 audit[1566]: AVC avc: denied { bpf } for pid=1566 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.543000 audit[1566]: AVC avc: denied { perfmon } for pid=1566 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.543000 audit[1566]: AVC avc: denied { perfmon } for pid=1566 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.543000 audit[1566]: AVC avc: denied { perfmon } for pid=1566 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.543000 audit[1566]: AVC avc: denied { perfmon } for pid=1566 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.543000 audit[1566]: AVC avc: denied { perfmon } for pid=1566 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.543000 audit[1566]: AVC avc: denied { bpf } for pid=1566 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.543000 audit[1566]: AVC avc: denied { bpf } for pid=1566 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:28.543000 audit: BPF prog-id=74 op=LOAD Oct 2 19:56:28.543000 audit[1566]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=1545 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:28.543000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337623062353461353437666336613735376536313638623837323963 Oct 2 19:56:28.556840 env[1142]: time="2023-10-02T19:56:28.556786376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xph9q,Uid:0a5951a4-b50b-4eb1-a072-46c96f4c3f9f,Namespace:kube-system,Attempt:0,} returns sandbox id \"271835d25ec304052cf865c46df77f3c5951e0257652c5de5ff08da89962074d\"" Oct 2 19:56:28.557602 kubelet[1441]: E1002 19:56:28.557578 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:56:28.559000 env[1142]: time="2023-10-02T19:56:28.558967483Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\"" Oct 2 19:56:28.559791 env[1142]: time="2023-10-02T19:56:28.559758009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7xx4g,Uid:9dbf2e76-0a84-4002-b2bf-724100123f89,Namespace:kube-system,Attempt:0,} returns sandbox id \"37b0b54a547fc6a757e6168b8729c2d3df0153b87ca46dbd7f39871214840243\"" Oct 2 19:56:28.560689 kubelet[1441]: E1002 19:56:28.560460 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:56:28.564211 kubelet[1441]: E1002 19:56:28.564188 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:28.804194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4133733372.mount: Deactivated successfully. Oct 2 19:56:29.565000 kubelet[1441]: E1002 19:56:29.564903 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:30.565544 kubelet[1441]: E1002 19:56:30.565473 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:31.566117 kubelet[1441]: E1002 19:56:31.566065 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:32.320840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3522916549.mount: Deactivated successfully. 
Oct 2 19:56:32.566367 kubelet[1441]: E1002 19:56:32.566312 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:32.617326 kubelet[1441]: E1002 19:56:32.617200 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:33.567047 kubelet[1441]: E1002 19:56:33.566984 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:34.567549 kubelet[1441]: E1002 19:56:34.567487 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:34.651376 env[1142]: time="2023-10-02T19:56:34.651322454Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:34.655036 env[1142]: time="2023-10-02T19:56:34.654987561Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4204f456d3e4a8a7ac29109cf66dfd9b53e82d3f2e8574599e358096d890b8db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:34.657007 env[1142]: time="2023-10-02T19:56:34.656974610Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:34.657616 env[1142]: time="2023-10-02T19:56:34.657582707Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\" returns image reference \"sha256:4204f456d3e4a8a7ac29109cf66dfd9b53e82d3f2e8574599e358096d890b8db\"" Oct 2 19:56:34.658957 
env[1142]: time="2023-10-02T19:56:34.658794258Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\"" Oct 2 19:56:34.660056 env[1142]: time="2023-10-02T19:56:34.660016814Z" level=info msg="CreateContainer within sandbox \"271835d25ec304052cf865c46df77f3c5951e0257652c5de5ff08da89962074d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:56:34.670525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4097094769.mount: Deactivated successfully. Oct 2 19:56:34.675929 env[1142]: time="2023-10-02T19:56:34.675884313Z" level=info msg="CreateContainer within sandbox \"271835d25ec304052cf865c46df77f3c5951e0257652c5de5ff08da89962074d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6aabe2a34e92eec1b1d5d98cf8c6e09d34a0f68af6938fa9eea72156fb0b5afc\"" Oct 2 19:56:34.677012 env[1142]: time="2023-10-02T19:56:34.676979487Z" level=info msg="StartContainer for \"6aabe2a34e92eec1b1d5d98cf8c6e09d34a0f68af6938fa9eea72156fb0b5afc\"" Oct 2 19:56:34.694611 systemd[1]: Started cri-containerd-6aabe2a34e92eec1b1d5d98cf8c6e09d34a0f68af6938fa9eea72156fb0b5afc.scope. Oct 2 19:56:34.718957 systemd[1]: cri-containerd-6aabe2a34e92eec1b1d5d98cf8c6e09d34a0f68af6938fa9eea72156fb0b5afc.scope: Deactivated successfully. 
Oct 2 19:56:34.852790 env[1142]: time="2023-10-02T19:56:34.852685339Z" level=info msg="shim disconnected" id=6aabe2a34e92eec1b1d5d98cf8c6e09d34a0f68af6938fa9eea72156fb0b5afc Oct 2 19:56:34.852790 env[1142]: time="2023-10-02T19:56:34.852738205Z" level=warning msg="cleaning up after shim disconnected" id=6aabe2a34e92eec1b1d5d98cf8c6e09d34a0f68af6938fa9eea72156fb0b5afc namespace=k8s.io Oct 2 19:56:34.852790 env[1142]: time="2023-10-02T19:56:34.852750811Z" level=info msg="cleaning up dead shim" Oct 2 19:56:34.861724 env[1142]: time="2023-10-02T19:56:34.861666720Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:56:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1642 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:56:34Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/6aabe2a34e92eec1b1d5d98cf8c6e09d34a0f68af6938fa9eea72156fb0b5afc/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:56:34.861996 env[1142]: time="2023-10-02T19:56:34.861908598Z" level=error msg="copy shim log" error="read /proc/self/fd/44: file already closed" Oct 2 19:56:34.864276 env[1142]: time="2023-10-02T19:56:34.864234612Z" level=error msg="Failed to pipe stdout of container \"6aabe2a34e92eec1b1d5d98cf8c6e09d34a0f68af6938fa9eea72156fb0b5afc\"" error="reading from a closed fifo" Oct 2 19:56:34.864455 env[1142]: time="2023-10-02T19:56:34.864413099Z" level=error msg="Failed to pipe stderr of container \"6aabe2a34e92eec1b1d5d98cf8c6e09d34a0f68af6938fa9eea72156fb0b5afc\"" error="reading from a closed fifo" Oct 2 19:56:34.866422 env[1142]: time="2023-10-02T19:56:34.866371694Z" level=error msg="StartContainer for \"6aabe2a34e92eec1b1d5d98cf8c6e09d34a0f68af6938fa9eea72156fb0b5afc\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:56:34.866759 kubelet[1441]: E1002 19:56:34.866721 1441 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="6aabe2a34e92eec1b1d5d98cf8c6e09d34a0f68af6938fa9eea72156fb0b5afc" Oct 2 19:56:34.866890 kubelet[1441]: E1002 19:56:34.866864 1441 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:56:34.866890 kubelet[1441]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:56:34.866890 kubelet[1441]: rm /hostbin/cilium-mount Oct 2 19:56:34.866890 kubelet[1441]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-hpkkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-xph9q_kube-system(0a5951a4-b50b-4eb1-a072-46c96f4c3f9f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:56:34.867033 kubelet[1441]: E1002 19:56:34.866903 1441 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-xph9q" podUID=0a5951a4-b50b-4eb1-a072-46c96f4c3f9f Oct 2 19:56:35.568330 kubelet[1441]: E1002 19:56:35.568288 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:35.668018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6aabe2a34e92eec1b1d5d98cf8c6e09d34a0f68af6938fa9eea72156fb0b5afc-rootfs.mount: Deactivated successfully. 
Oct 2 19:56:35.763551 kubelet[1441]: E1002 19:56:35.763493 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:56:35.765176 env[1142]: time="2023-10-02T19:56:35.765125671Z" level=info msg="CreateContainer within sandbox \"271835d25ec304052cf865c46df77f3c5951e0257652c5de5ff08da89962074d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:56:35.789224 env[1142]: time="2023-10-02T19:56:35.789177430Z" level=info msg="CreateContainer within sandbox \"271835d25ec304052cf865c46df77f3c5951e0257652c5de5ff08da89962074d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"14fa30d39ff2091ab299b47a3c6588fc77f1d23da73dc4c5f917513b21eac750\"" Oct 2 19:56:35.789807 env[1142]: time="2023-10-02T19:56:35.789782429Z" level=info msg="StartContainer for \"14fa30d39ff2091ab299b47a3c6588fc77f1d23da73dc4c5f917513b21eac750\"" Oct 2 19:56:35.814221 systemd[1]: Started cri-containerd-14fa30d39ff2091ab299b47a3c6588fc77f1d23da73dc4c5f917513b21eac750.scope. Oct 2 19:56:35.831917 systemd[1]: cri-containerd-14fa30d39ff2091ab299b47a3c6588fc77f1d23da73dc4c5f917513b21eac750.scope: Deactivated successfully. 
Oct 2 19:56:35.844214 env[1142]: time="2023-10-02T19:56:35.844158235Z" level=info msg="shim disconnected" id=14fa30d39ff2091ab299b47a3c6588fc77f1d23da73dc4c5f917513b21eac750 Oct 2 19:56:35.844461 env[1142]: time="2023-10-02T19:56:35.844442486Z" level=warning msg="cleaning up after shim disconnected" id=14fa30d39ff2091ab299b47a3c6588fc77f1d23da73dc4c5f917513b21eac750 namespace=k8s.io Oct 2 19:56:35.844524 env[1142]: time="2023-10-02T19:56:35.844511758Z" level=info msg="cleaning up dead shim" Oct 2 19:56:35.855674 env[1142]: time="2023-10-02T19:56:35.855625958Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:56:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1679 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:56:35Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/14fa30d39ff2091ab299b47a3c6588fc77f1d23da73dc4c5f917513b21eac750/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:56:35.856184 env[1142]: time="2023-10-02T19:56:35.856052434Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 19:56:35.859487 env[1142]: time="2023-10-02T19:56:35.859222374Z" level=error msg="Failed to pipe stdout of container \"14fa30d39ff2091ab299b47a3c6588fc77f1d23da73dc4c5f917513b21eac750\"" error="reading from a closed fifo" Oct 2 19:56:35.859616 env[1142]: time="2023-10-02T19:56:35.859425548Z" level=error msg="Failed to pipe stderr of container \"14fa30d39ff2091ab299b47a3c6588fc77f1d23da73dc4c5f917513b21eac750\"" error="reading from a closed fifo" Oct 2 19:56:35.861020 env[1142]: time="2023-10-02T19:56:35.860952371Z" level=error msg="StartContainer for \"14fa30d39ff2091ab299b47a3c6588fc77f1d23da73dc4c5f917513b21eac750\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:56:35.861244 kubelet[1441]: E1002 19:56:35.861209 1441 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="14fa30d39ff2091ab299b47a3c6588fc77f1d23da73dc4c5f917513b21eac750" Oct 2 19:56:35.861402 kubelet[1441]: E1002 19:56:35.861367 1441 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:56:35.861402 kubelet[1441]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:56:35.861402 kubelet[1441]: rm /hostbin/cilium-mount Oct 2 19:56:35.861402 kubelet[1441]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-hpkkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-xph9q_kube-system(0a5951a4-b50b-4eb1-a072-46c96f4c3f9f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:56:35.861661 kubelet[1441]: E1002 19:56:35.861419 1441 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-xph9q" podUID=0a5951a4-b50b-4eb1-a072-46c96f4c3f9f Oct 2 19:56:36.568732 kubelet[1441]: E1002 19:56:36.568684 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:36.668106 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14fa30d39ff2091ab299b47a3c6588fc77f1d23da73dc4c5f917513b21eac750-rootfs.mount: Deactivated successfully. 
Oct 2 19:56:36.766830 kubelet[1441]: I1002 19:56:36.766739 1441 scope.go:115] "RemoveContainer" containerID="6aabe2a34e92eec1b1d5d98cf8c6e09d34a0f68af6938fa9eea72156fb0b5afc" Oct 2 19:56:36.766969 kubelet[1441]: I1002 19:56:36.766904 1441 scope.go:115] "RemoveContainer" containerID="6aabe2a34e92eec1b1d5d98cf8c6e09d34a0f68af6938fa9eea72156fb0b5afc" Oct 2 19:56:36.767959 env[1142]: time="2023-10-02T19:56:36.767922474Z" level=info msg="RemoveContainer for \"6aabe2a34e92eec1b1d5d98cf8c6e09d34a0f68af6938fa9eea72156fb0b5afc\"" Oct 2 19:56:36.768327 env[1142]: time="2023-10-02T19:56:36.768302639Z" level=info msg="RemoveContainer for \"6aabe2a34e92eec1b1d5d98cf8c6e09d34a0f68af6938fa9eea72156fb0b5afc\"" Oct 2 19:56:36.768462 env[1142]: time="2023-10-02T19:56:36.768431455Z" level=error msg="RemoveContainer for \"6aabe2a34e92eec1b1d5d98cf8c6e09d34a0f68af6938fa9eea72156fb0b5afc\" failed" error="failed to set removing state for container \"6aabe2a34e92eec1b1d5d98cf8c6e09d34a0f68af6938fa9eea72156fb0b5afc\": container is already in removing state" Oct 2 19:56:36.768601 kubelet[1441]: E1002 19:56:36.768576 1441 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"6aabe2a34e92eec1b1d5d98cf8c6e09d34a0f68af6938fa9eea72156fb0b5afc\": container is already in removing state" containerID="6aabe2a34e92eec1b1d5d98cf8c6e09d34a0f68af6938fa9eea72156fb0b5afc" Oct 2 19:56:36.768698 kubelet[1441]: I1002 19:56:36.768620 1441 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:6aabe2a34e92eec1b1d5d98cf8c6e09d34a0f68af6938fa9eea72156fb0b5afc} err="rpc error: code = Unknown desc = failed to set removing state for container \"6aabe2a34e92eec1b1d5d98cf8c6e09d34a0f68af6938fa9eea72156fb0b5afc\": container is already in removing state" Oct 2 19:56:36.770484 env[1142]: time="2023-10-02T19:56:36.770431726Z" level=info msg="RemoveContainer for 
\"6aabe2a34e92eec1b1d5d98cf8c6e09d34a0f68af6938fa9eea72156fb0b5afc\" returns successfully" Oct 2 19:56:36.770940 kubelet[1441]: E1002 19:56:36.770700 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:56:36.770940 kubelet[1441]: E1002 19:56:36.770905 1441 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-xph9q_kube-system(0a5951a4-b50b-4eb1-a072-46c96f4c3f9f)\"" pod="kube-system/cilium-xph9q" podUID=0a5951a4-b50b-4eb1-a072-46c96f4c3f9f Oct 2 19:56:36.865283 env[1142]: time="2023-10-02T19:56:36.865158874Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:36.867054 env[1142]: time="2023-10-02T19:56:36.867021205Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:36ad84e6a838b02d80a9db87b13c83185253f647e2af2f58f91ac1346103ff4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:36.868383 env[1142]: time="2023-10-02T19:56:36.868349903Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:36.869833 env[1142]: time="2023-10-02T19:56:36.869809939Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:36.870321 env[1142]: time="2023-10-02T19:56:36.870296630Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\" returns image reference 
\"sha256:36ad84e6a838b02d80a9db87b13c83185253f647e2af2f58f91ac1346103ff4e\"" Oct 2 19:56:36.872392 env[1142]: time="2023-10-02T19:56:36.872359528Z" level=info msg="CreateContainer within sandbox \"37b0b54a547fc6a757e6168b8729c2d3df0153b87ca46dbd7f39871214840243\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:56:36.881527 env[1142]: time="2023-10-02T19:56:36.881476416Z" level=info msg="CreateContainer within sandbox \"37b0b54a547fc6a757e6168b8729c2d3df0153b87ca46dbd7f39871214840243\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"25263509d3f7f2fc4cc0cb473e5f3f6c9403afa1b2783c913cedb1a67d0b976c\"" Oct 2 19:56:36.881997 env[1142]: time="2023-10-02T19:56:36.881959106Z" level=info msg="StartContainer for \"25263509d3f7f2fc4cc0cb473e5f3f6c9403afa1b2783c913cedb1a67d0b976c\"" Oct 2 19:56:36.899850 systemd[1]: Started cri-containerd-25263509d3f7f2fc4cc0cb473e5f3f6c9403afa1b2783c913cedb1a67d0b976c.scope. Oct 2 19:56:36.927000 audit[1701]: AVC avc: denied { perfmon } for pid=1701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.930616 kernel: kauditd_printk_skb: 106 callbacks suppressed Oct 2 19:56:36.930778 kernel: audit: type=1400 audit(1696276596.927:614): avc: denied { perfmon } for pid=1701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.930803 kernel: audit: type=1300 audit(1696276596.927:614): arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=1545 pid=1701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:36.927000 audit[1701]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001955a0 a2=3c a3=0 items=0 
ppid=1545 pid=1701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:36.933165 kernel: audit: type=1327 audit(1696276596.927:614): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235323633353039643366376632666334636330636234373365356633 Oct 2 19:56:36.927000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235323633353039643366376632666334636330636234373365356633 Oct 2 19:56:36.927000 audit[1701]: AVC avc: denied { bpf } for pid=1701 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.936964 kernel: audit: type=1400 audit(1696276596.927:615): avc: denied { bpf } for pid=1701 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.927000 audit[1701]: AVC avc: denied { bpf } for pid=1701 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.939110 kernel: audit: type=1400 audit(1696276596.927:615): avc: denied { bpf } for pid=1701 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.927000 audit[1701]: AVC avc: denied { bpf } for pid=1701 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.941265 
kernel: audit: type=1400 audit(1696276596.927:615): avc: denied { bpf } for pid=1701 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.941337 kernel: audit: type=1400 audit(1696276596.927:615): avc: denied { perfmon } for pid=1701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.927000 audit[1701]: AVC avc: denied { perfmon } for pid=1701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.927000 audit[1701]: AVC avc: denied { perfmon } for pid=1701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.944553 kernel: audit: type=1400 audit(1696276596.927:615): avc: denied { perfmon } for pid=1701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.944615 kernel: audit: type=1400 audit(1696276596.927:615): avc: denied { perfmon } for pid=1701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.927000 audit[1701]: AVC avc: denied { perfmon } for pid=1701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.927000 audit[1701]: AVC avc: denied { perfmon } for pid=1701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.947732 kernel: audit: type=1400 audit(1696276596.927:615): avc: denied { perfmon } for pid=1701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.927000 audit[1701]: AVC avc: denied { perfmon } for pid=1701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.927000 audit[1701]: AVC avc: denied { bpf } for pid=1701 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.927000 audit[1701]: AVC avc: denied { bpf } for pid=1701 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.927000 audit: BPF prog-id=75 op=LOAD Oct 2 19:56:36.927000 audit[1701]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=1545 pid=1701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:36.927000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235323633353039643366376632666334636330636234373365356633 Oct 2 19:56:36.927000 audit[1701]: AVC avc: denied { bpf } for pid=1701 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.927000 audit[1701]: AVC avc: denied { bpf } for pid=1701 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.927000 audit[1701]: AVC avc: denied { perfmon } for pid=1701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:56:36.927000 audit[1701]: AVC avc: denied { perfmon } for pid=1701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.927000 audit[1701]: AVC avc: denied { perfmon } for pid=1701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.927000 audit[1701]: AVC avc: denied { perfmon } for pid=1701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.927000 audit[1701]: AVC avc: denied { perfmon } for pid=1701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.927000 audit[1701]: AVC avc: denied { bpf } for pid=1701 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.927000 audit[1701]: AVC avc: denied { bpf } for pid=1701 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.927000 audit: BPF prog-id=76 op=LOAD Oct 2 19:56:36.927000 audit[1701]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=1545 pid=1701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:36.927000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235323633353039643366376632666334636330636234373365356633 Oct 
2 19:56:36.927000 audit: BPF prog-id=76 op=UNLOAD Oct 2 19:56:36.927000 audit: BPF prog-id=75 op=UNLOAD Oct 2 19:56:36.927000 audit[1701]: AVC avc: denied { bpf } for pid=1701 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.927000 audit[1701]: AVC avc: denied { bpf } for pid=1701 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.927000 audit[1701]: AVC avc: denied { bpf } for pid=1701 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.927000 audit[1701]: AVC avc: denied { perfmon } for pid=1701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.927000 audit[1701]: AVC avc: denied { perfmon } for pid=1701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.927000 audit[1701]: AVC avc: denied { perfmon } for pid=1701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.927000 audit[1701]: AVC avc: denied { perfmon } for pid=1701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.927000 audit[1701]: AVC avc: denied { perfmon } for pid=1701 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.927000 audit[1701]: AVC avc: denied { bpf } for pid=1701 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 
2 19:56:36.927000 audit[1701]: AVC avc: denied { bpf } for pid=1701 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:36.927000 audit: BPF prog-id=77 op=LOAD Oct 2 19:56:36.927000 audit[1701]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=1545 pid=1701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:36.927000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235323633353039643366376632666334636330636234373365356633 Oct 2 19:56:36.954942 env[1142]: time="2023-10-02T19:56:36.954903254Z" level=info msg="StartContainer for \"25263509d3f7f2fc4cc0cb473e5f3f6c9403afa1b2783c913cedb1a67d0b976c\" returns successfully" Oct 2 19:56:37.047169 kernel: IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) Oct 2 19:56:37.047280 kernel: IPVS: Connection hash table configured (size=4096, memory=32Kbytes) Oct 2 19:56:37.047309 kernel: IPVS: ipvs loaded. Oct 2 19:56:37.054172 kernel: IPVS: [rr] scheduler registered. Oct 2 19:56:37.060170 kernel: IPVS: [wrr] scheduler registered. Oct 2 19:56:37.064171 kernel: IPVS: [sh] scheduler registered. 
Oct 2 19:56:37.115000 audit[1759]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=1759 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:37.115000 audit[1759]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffece948b0 a2=0 a3=ffffa2e926c0 items=0 ppid=1711 pid=1759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.115000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:56:37.118000 audit[1762]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_chain pid=1762 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:37.118000 audit[1762]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcb600d10 a2=0 a3=ffffa218c6c0 items=0 ppid=1711 pid=1762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.118000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:56:37.119000 audit[1765]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_chain pid=1765 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:37.119000 audit[1765]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd4c1f830 a2=0 a3=ffff9300e6c0 items=0 ppid=1711 pid=1765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.119000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:56:37.121000 audit[1761]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=1761 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:37.121000 audit[1761]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffe87a210 a2=0 a3=ffff7f8db6c0 items=0 ppid=1711 pid=1761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.121000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:56:37.124000 audit[1767]: NETFILTER_CFG table=nat:39 family=10 entries=1 op=nft_register_chain pid=1767 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:37.124000 audit[1767]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdd401da0 a2=0 a3=ffffbf15c6c0 items=0 ppid=1711 pid=1767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.124000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:56:37.125000 audit[1768]: NETFILTER_CFG table=filter:40 family=10 entries=1 op=nft_register_chain pid=1768 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:37.125000 audit[1768]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdb5d3460 a2=0 a3=ffffba9d26c0 items=0 ppid=1711 pid=1768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 
2 19:56:37.125000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:56:37.223000 audit[1769]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=1769 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:37.223000 audit[1769]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd25948e0 a2=0 a3=ffff92e356c0 items=0 ppid=1711 pid=1769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.223000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:56:37.226000 audit[1771]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=1771 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:37.226000 audit[1771]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe5521b60 a2=0 a3=ffffbd7026c0 items=0 ppid=1711 pid=1771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.226000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:56:37.230000 audit[1774]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=1774 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:37.230000 audit[1774]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffff8113c30 a2=0 a3=ffffb23086c0 items=0 ppid=1711 pid=1774 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.230000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:56:37.233000 audit[1775]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=1775 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:37.233000 audit[1775]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff33921d0 a2=0 a3=ffffa13e46c0 items=0 ppid=1711 pid=1775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.233000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:56:37.237000 audit[1777]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=1777 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:37.237000 audit[1777]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe1f53a80 a2=0 a3=ffff967826c0 items=0 ppid=1711 pid=1777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.237000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:56:37.238000 
audit[1778]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=1778 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:37.238000 audit[1778]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd9d26de0 a2=0 a3=ffffb268a6c0 items=0 ppid=1711 pid=1778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.238000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:56:37.242000 audit[1780]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=1780 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:37.242000 audit[1780]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe838bfb0 a2=0 a3=ffff8d22c6c0 items=0 ppid=1711 pid=1780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.242000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:56:37.249000 audit[1783]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=1783 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:37.249000 audit[1783]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff826ca60 a2=0 a3=ffff9a76a6c0 items=0 ppid=1711 pid=1783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 
19:56:37.249000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:56:37.252000 audit[1784]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=1784 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:37.252000 audit[1784]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd5d16410 a2=0 a3=ffff9bc7d6c0 items=0 ppid=1711 pid=1784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.252000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:56:37.259000 audit[1786]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=1786 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:37.259000 audit[1786]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff2d42e80 a2=0 a3=ffffaa4786c0 items=0 ppid=1711 pid=1786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.259000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:56:37.261000 audit[1787]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=1787 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:37.261000 audit[1787]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=104 a0=3 a1=ffffc6822b70 a2=0 a3=ffff8141d6c0 items=0 ppid=1711 pid=1787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.261000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:56:37.265000 audit[1789]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=1789 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:37.265000 audit[1789]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcfe31830 a2=0 a3=ffff9dcc16c0 items=0 ppid=1711 pid=1789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.265000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:56:37.270000 audit[1792]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=1792 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:37.270000 audit[1792]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe4248150 a2=0 a3=ffffb617c6c0 items=0 ppid=1711 pid=1792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.270000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:56:37.274000 audit[1795]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=1795 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:37.274000 audit[1795]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc758ebc0 a2=0 a3=ffffa1e136c0 items=0 ppid=1711 pid=1795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.274000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:56:37.277000 audit[1796]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=1796 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:37.277000 audit[1796]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffcbde5d40 a2=0 a3=ffffb64296c0 items=0 ppid=1711 pid=1796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.277000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:56:37.282000 audit[1798]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=1798 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:37.282000 audit[1798]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=600 a0=3 a1=ffffc943f0a0 a2=0 a3=ffffb0d316c0 items=0 ppid=1711 pid=1798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.282000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:56:37.286000 audit[1801]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=1801 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:37.286000 audit[1801]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffc716b510 a2=0 a3=ffffb289a6c0 items=0 ppid=1711 pid=1801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.286000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:56:37.304000 audit[1805]: NETFILTER_CFG table=filter:58 family=2 entries=6 op=nft_register_rule pid=1805 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:56:37.304000 audit[1805]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffdee4a760 a2=0 a3=ffffbac116c0 items=0 ppid=1711 pid=1805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.304000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:56:37.316000 audit[1805]: NETFILTER_CFG table=nat:59 family=2 entries=17 op=nft_register_chain pid=1805 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:56:37.316000 audit[1805]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffdee4a760 a2=0 a3=ffffbac116c0 items=0 ppid=1711 pid=1805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.316000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:56:37.317000 audit[1809]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=1809 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:37.317000 audit[1809]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe4e55e00 a2=0 a3=ffff9754a6c0 items=0 ppid=1711 pid=1809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.317000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:56:37.319000 audit[1811]: NETFILTER_CFG table=filter:61 family=10 entries=2 op=nft_register_chain pid=1811 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:37.319000 audit[1811]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffff6d0560 a2=0 a3=ffffac3746c0 items=0 ppid=1711 pid=1811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.319000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:56:37.325000 audit[1814]: NETFILTER_CFG table=filter:62 family=10 entries=2 op=nft_register_chain pid=1814 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:37.325000 audit[1814]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffcd9b54f0 a2=0 a3=ffff8bfdd6c0 items=0 ppid=1711 pid=1814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.325000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 19:56:37.328000 audit[1815]: NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=1815 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:37.328000 audit[1815]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd2d21830 a2=0 a3=ffffb3cbf6c0 items=0 ppid=1711 pid=1815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.328000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:56:37.331000 audit[1817]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_rule pid=1817 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:37.331000 audit[1817]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffcdc3af0 a2=0 a3=ffff9f0d66c0 items=0 ppid=1711 pid=1817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.331000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:56:37.332000 audit[1818]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=1818 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:37.332000 audit[1818]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd8f50290 a2=0 a3=ffff978596c0 items=0 ppid=1711 pid=1818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.332000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:56:37.334000 audit[1820]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=1820 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:37.334000 audit[1820]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffffe57b8d0 a2=0 a3=ffffacb466c0 items=0 ppid=1711 pid=1820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.334000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 19:56:37.339000 audit[1823]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=1823 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:37.339000 audit[1823]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=fffffa581a60 a2=0 a3=ffff8624d6c0 items=0 ppid=1711 pid=1823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.339000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:56:37.340000 audit[1824]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=1824 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:37.340000 audit[1824]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd8ede200 a2=0 a3=ffff804ac6c0 items=0 ppid=1711 pid=1824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.340000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:56:37.343000 audit[1826]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=1826 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:37.343000 audit[1826]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=528 a0=3 a1=ffffdcf45ca0 a2=0 a3=ffff978d76c0 items=0 ppid=1711 pid=1826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.343000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:56:37.344000 audit[1827]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=1827 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:37.344000 audit[1827]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe14e1050 a2=0 a3=ffff8e76b6c0 items=0 ppid=1711 pid=1827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.344000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:56:37.348000 audit[1829]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=1829 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:37.348000 audit[1829]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffda910430 a2=0 a3=ffffa5ad96c0 items=0 ppid=1711 pid=1829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.348000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:56:37.355000 audit[1832]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=1832 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:37.355000 audit[1832]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffdf3c230 a2=0 a3=ffffb586d6c0 items=0 ppid=1711 pid=1832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.355000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:56:37.360000 audit[1835]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=1835 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:37.360000 audit[1835]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe41fc920 a2=0 a3=ffffa5c0e6c0 items=0 ppid=1711 pid=1835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.360000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:56:37.361000 audit[1836]: NETFILTER_CFG table=nat:74 family=10 
entries=1 op=nft_register_chain pid=1836 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:37.361000 audit[1836]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe3018010 a2=0 a3=ffffb7a766c0 items=0 ppid=1711 pid=1836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.361000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:56:37.364000 audit[1838]: NETFILTER_CFG table=nat:75 family=10 entries=2 op=nft_register_chain pid=1838 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:37.364000 audit[1838]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffcc3d86a0 a2=0 a3=ffffb5dc16c0 items=0 ppid=1711 pid=1838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.364000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:56:37.368000 audit[1841]: NETFILTER_CFG table=nat:76 family=10 entries=2 op=nft_register_chain pid=1841 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:37.368000 audit[1841]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffeb318fe0 a2=0 a3=ffffb541f6c0 items=0 ppid=1711 pid=1841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.368000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:56:37.379000 audit[1845]: NETFILTER_CFG table=filter:77 family=10 entries=3 op=nft_register_rule pid=1845 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:56:37.379000 audit[1845]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=fffff93d0890 a2=0 a3=ffff968b06c0 items=0 ppid=1711 pid=1845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.379000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:56:37.379000 audit[1845]: NETFILTER_CFG table=nat:78 family=10 entries=10 op=nft_register_chain pid=1845 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:56:37.379000 audit[1845]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1860 a0=3 a1=fffff93d0890 a2=0 a3=ffff968b06c0 items=0 ppid=1711 pid=1845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:37.379000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:56:37.568983 kubelet[1441]: E1002 19:56:37.568929 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:37.617589 kubelet[1441]: E1002 19:56:37.617540 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: cni plugin not initialized" Oct 2 19:56:37.668176 systemd[1]: run-containerd-runc-k8s.io-25263509d3f7f2fc4cc0cb473e5f3f6c9403afa1b2783c913cedb1a67d0b976c-runc.ylPuJ9.mount: Deactivated successfully. Oct 2 19:56:37.769593 kubelet[1441]: E1002 19:56:37.769537 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:56:37.771236 kubelet[1441]: E1002 19:56:37.771210 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:56:37.771443 kubelet[1441]: E1002 19:56:37.771417 1441 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-xph9q_kube-system(0a5951a4-b50b-4eb1-a072-46c96f4c3f9f)\"" pod="kube-system/cilium-xph9q" podUID=0a5951a4-b50b-4eb1-a072-46c96f4c3f9f Oct 2 19:56:37.957244 kubelet[1441]: W1002 19:56:37.957009 1441 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a5951a4_b50b_4eb1_a072_46c96f4c3f9f.slice/cri-containerd-6aabe2a34e92eec1b1d5d98cf8c6e09d34a0f68af6938fa9eea72156fb0b5afc.scope WatchSource:0}: container "6aabe2a34e92eec1b1d5d98cf8c6e09d34a0f68af6938fa9eea72156fb0b5afc" in namespace "k8s.io": not found Oct 2 19:56:38.569462 kubelet[1441]: E1002 19:56:38.569401 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:38.773290 kubelet[1441]: E1002 19:56:38.772831 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:56:39.570325 kubelet[1441]: E1002 
19:56:39.570272 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:40.570943 kubelet[1441]: E1002 19:56:40.570891 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:41.068103 kubelet[1441]: W1002 19:56:41.068001 1441 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a5951a4_b50b_4eb1_a072_46c96f4c3f9f.slice/cri-containerd-14fa30d39ff2091ab299b47a3c6588fc77f1d23da73dc4c5f917513b21eac750.scope WatchSource:0}: task 14fa30d39ff2091ab299b47a3c6588fc77f1d23da73dc4c5f917513b21eac750 not found: not found Oct 2 19:56:41.073925 kubelet[1441]: E1002 19:56:41.073900 1441 cadvisor_stats_provider.go:457] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9dbf2e76_0a84_4002_b2bf_724100123f89.slice/cri-containerd-25263509d3f7f2fc4cc0cb473e5f3f6c9403afa1b2783c913cedb1a67d0b976c.scope\": RecentStats: unable to find data in memory cache]" Oct 2 19:56:41.571076 kubelet[1441]: E1002 19:56:41.571027 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:42.572192 kubelet[1441]: E1002 19:56:42.572104 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:42.619206 kubelet[1441]: E1002 19:56:42.619180 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:43.573111 kubelet[1441]: E1002 19:56:43.573061 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:44.573622 kubelet[1441]: E1002 
19:56:44.573579 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:45.574490 kubelet[1441]: E1002 19:56:45.574424 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:46.575253 kubelet[1441]: E1002 19:56:46.575202 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:46.963654 update_engine[1132]: I1002 19:56:46.963223 1132 update_attempter.cc:505] Updating boot flags... Oct 2 19:56:47.551033 kubelet[1441]: E1002 19:56:47.550993 1441 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:47.576239 kubelet[1441]: E1002 19:56:47.576191 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:47.619957 kubelet[1441]: E1002 19:56:47.619882 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:48.577239 kubelet[1441]: E1002 19:56:48.577201 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:49.577892 kubelet[1441]: E1002 19:56:49.577845 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:50.578674 kubelet[1441]: E1002 19:56:50.578624 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:51.579118 kubelet[1441]: E1002 19:56:51.579082 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:52.580202 kubelet[1441]: E1002 19:56:52.580101 
1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:52.628537 kubelet[1441]: E1002 19:56:52.628508 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:52.721658 kubelet[1441]: E1002 19:56:52.721626 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:56:52.723641 env[1142]: time="2023-10-02T19:56:52.723600717Z" level=info msg="CreateContainer within sandbox \"271835d25ec304052cf865c46df77f3c5951e0257652c5de5ff08da89962074d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:56:52.731223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3410655075.mount: Deactivated successfully. Oct 2 19:56:52.733385 env[1142]: time="2023-10-02T19:56:52.733346888Z" level=info msg="CreateContainer within sandbox \"271835d25ec304052cf865c46df77f3c5951e0257652c5de5ff08da89962074d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"f080da6fcef7e8574dd8e4e0fd45e24e282a29e422e2b3d4c2615f400a9ee24e\"" Oct 2 19:56:52.733757 env[1142]: time="2023-10-02T19:56:52.733731681Z" level=info msg="StartContainer for \"f080da6fcef7e8574dd8e4e0fd45e24e282a29e422e2b3d4c2615f400a9ee24e\"" Oct 2 19:56:52.750007 systemd[1]: Started cri-containerd-f080da6fcef7e8574dd8e4e0fd45e24e282a29e422e2b3d4c2615f400a9ee24e.scope. Oct 2 19:56:52.771937 systemd[1]: cri-containerd-f080da6fcef7e8574dd8e4e0fd45e24e282a29e422e2b3d4c2615f400a9ee24e.scope: Deactivated successfully. Oct 2 19:56:52.775029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f080da6fcef7e8574dd8e4e0fd45e24e282a29e422e2b3d4c2615f400a9ee24e-rootfs.mount: Deactivated successfully. 
Oct 2 19:56:52.877745 env[1142]: time="2023-10-02T19:56:52.877624649Z" level=info msg="shim disconnected" id=f080da6fcef7e8574dd8e4e0fd45e24e282a29e422e2b3d4c2615f400a9ee24e Oct 2 19:56:52.877948 env[1142]: time="2023-10-02T19:56:52.877925186Z" level=warning msg="cleaning up after shim disconnected" id=f080da6fcef7e8574dd8e4e0fd45e24e282a29e422e2b3d4c2615f400a9ee24e namespace=k8s.io Oct 2 19:56:52.878012 env[1142]: time="2023-10-02T19:56:52.877998720Z" level=info msg="cleaning up dead shim" Oct 2 19:56:52.885727 env[1142]: time="2023-10-02T19:56:52.885686540Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:56:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1885 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:56:52Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/f080da6fcef7e8574dd8e4e0fd45e24e282a29e422e2b3d4c2615f400a9ee24e/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:56:52.886094 env[1142]: time="2023-10-02T19:56:52.886045128Z" level=error msg="copy shim log" error="read /proc/self/fd/58: file already closed" Oct 2 19:56:52.889007 env[1142]: time="2023-10-02T19:56:52.886322021Z" level=error msg="Failed to pipe stdout of container \"f080da6fcef7e8574dd8e4e0fd45e24e282a29e422e2b3d4c2615f400a9ee24e\"" error="reading from a closed fifo" Oct 2 19:56:52.889131 env[1142]: time="2023-10-02T19:56:52.888952561Z" level=error msg="Failed to pipe stderr of container \"f080da6fcef7e8574dd8e4e0fd45e24e282a29e422e2b3d4c2615f400a9ee24e\"" error="reading from a closed fifo" Oct 2 19:56:52.891075 env[1142]: time="2023-10-02T19:56:52.891018193Z" level=error msg="StartContainer for \"f080da6fcef7e8574dd8e4e0fd45e24e282a29e422e2b3d4c2615f400a9ee24e\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:56:52.891267 kubelet[1441]: E1002 19:56:52.891243 1441 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="f080da6fcef7e8574dd8e4e0fd45e24e282a29e422e2b3d4c2615f400a9ee24e" Oct 2 19:56:52.891401 kubelet[1441]: E1002 19:56:52.891378 1441 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:56:52.891401 kubelet[1441]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:56:52.891401 kubelet[1441]: rm /hostbin/cilium-mount Oct 2 19:56:52.891401 kubelet[1441]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-hpkkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-xph9q_kube-system(0a5951a4-b50b-4eb1-a072-46c96f4c3f9f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:56:52.891553 kubelet[1441]: E1002 19:56:52.891422 1441 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-xph9q" podUID=0a5951a4-b50b-4eb1-a072-46c96f4c3f9f Oct 2 19:56:53.580244 kubelet[1441]: E1002 19:56:53.580189 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:53.798716 kubelet[1441]: I1002 19:56:53.798266 1441 scope.go:115] "RemoveContainer" containerID="14fa30d39ff2091ab299b47a3c6588fc77f1d23da73dc4c5f917513b21eac750" Oct 2 19:56:53.798716 kubelet[1441]: I1002 19:56:53.798547 1441 scope.go:115] "RemoveContainer" containerID="14fa30d39ff2091ab299b47a3c6588fc77f1d23da73dc4c5f917513b21eac750" Oct 2 19:56:53.799557 env[1142]: time="2023-10-02T19:56:53.799525770Z" level=info msg="RemoveContainer for \"14fa30d39ff2091ab299b47a3c6588fc77f1d23da73dc4c5f917513b21eac750\"" Oct 2 19:56:53.800015 env[1142]: 
time="2023-10-02T19:56:53.799988974Z" level=info msg="RemoveContainer for \"14fa30d39ff2091ab299b47a3c6588fc77f1d23da73dc4c5f917513b21eac750\"" Oct 2 19:56:53.800092 env[1142]: time="2023-10-02T19:56:53.800063668Z" level=error msg="RemoveContainer for \"14fa30d39ff2091ab299b47a3c6588fc77f1d23da73dc4c5f917513b21eac750\" failed" error="failed to set removing state for container \"14fa30d39ff2091ab299b47a3c6588fc77f1d23da73dc4c5f917513b21eac750\": container is already in removing state" Oct 2 19:56:53.800717 kubelet[1441]: E1002 19:56:53.800210 1441 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"14fa30d39ff2091ab299b47a3c6588fc77f1d23da73dc4c5f917513b21eac750\": container is already in removing state" containerID="14fa30d39ff2091ab299b47a3c6588fc77f1d23da73dc4c5f917513b21eac750" Oct 2 19:56:53.800717 kubelet[1441]: E1002 19:56:53.800237 1441 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "14fa30d39ff2091ab299b47a3c6588fc77f1d23da73dc4c5f917513b21eac750": container is already in removing state; Skipping pod "cilium-xph9q_kube-system(0a5951a4-b50b-4eb1-a072-46c96f4c3f9f)" Oct 2 19:56:53.800717 kubelet[1441]: E1002 19:56:53.800286 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:56:53.800717 kubelet[1441]: E1002 19:56:53.800474 1441 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-xph9q_kube-system(0a5951a4-b50b-4eb1-a072-46c96f4c3f9f)\"" pod="kube-system/cilium-xph9q" podUID=0a5951a4-b50b-4eb1-a072-46c96f4c3f9f Oct 2 19:56:53.802176 env[1142]: time="2023-10-02T19:56:53.801674680Z" 
level=info msg="RemoveContainer for \"14fa30d39ff2091ab299b47a3c6588fc77f1d23da73dc4c5f917513b21eac750\" returns successfully" Oct 2 19:56:54.581357 kubelet[1441]: E1002 19:56:54.581291 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:55.582306 kubelet[1441]: E1002 19:56:55.582231 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:55.982039 kubelet[1441]: W1002 19:56:55.981991 1441 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a5951a4_b50b_4eb1_a072_46c96f4c3f9f.slice/cri-containerd-f080da6fcef7e8574dd8e4e0fd45e24e282a29e422e2b3d4c2615f400a9ee24e.scope WatchSource:0}: task f080da6fcef7e8574dd8e4e0fd45e24e282a29e422e2b3d4c2615f400a9ee24e not found: not found Oct 2 19:56:56.583306 kubelet[1441]: E1002 19:56:56.583226 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:57.583723 kubelet[1441]: E1002 19:56:57.583644 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:57.629212 kubelet[1441]: E1002 19:56:57.629179 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:56:58.584237 kubelet[1441]: E1002 19:56:58.584173 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:59.584476 kubelet[1441]: E1002 19:56:59.584413 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:00.585361 kubelet[1441]: E1002 19:57:00.585283 1441 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:01.585481 kubelet[1441]: E1002 19:57:01.585430 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:02.586162 kubelet[1441]: E1002 19:57:02.586096 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:02.630025 kubelet[1441]: E1002 19:57:02.629995 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:03.587120 kubelet[1441]: E1002 19:57:03.587069 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:04.587355 kubelet[1441]: E1002 19:57:04.587308 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:05.588333 kubelet[1441]: E1002 19:57:05.588290 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:06.588445 kubelet[1441]: E1002 19:57:06.588400 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:06.720948 kubelet[1441]: E1002 19:57:06.720918 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:57:06.721989 kubelet[1441]: E1002 19:57:06.721631 1441 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-xph9q_kube-system(0a5951a4-b50b-4eb1-a072-46c96f4c3f9f)\"" pod="kube-system/cilium-xph9q" 
podUID=0a5951a4-b50b-4eb1-a072-46c96f4c3f9f Oct 2 19:57:07.551761 kubelet[1441]: E1002 19:57:07.551682 1441 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:07.589839 kubelet[1441]: E1002 19:57:07.589794 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:07.631253 kubelet[1441]: E1002 19:57:07.631213 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:08.590158 kubelet[1441]: E1002 19:57:08.590079 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:09.590832 kubelet[1441]: E1002 19:57:09.590762 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:10.591580 kubelet[1441]: E1002 19:57:10.591521 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:11.591977 kubelet[1441]: E1002 19:57:11.591886 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:12.592262 kubelet[1441]: E1002 19:57:12.592191 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:12.632186 kubelet[1441]: E1002 19:57:12.632133 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:13.592704 kubelet[1441]: E1002 19:57:13.592656 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:14.592864 
kubelet[1441]: E1002 19:57:14.592826 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:15.593523 kubelet[1441]: E1002 19:57:15.593478 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:16.594965 kubelet[1441]: E1002 19:57:16.594929 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:17.596154 kubelet[1441]: E1002 19:57:17.596110 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:17.633572 kubelet[1441]: E1002 19:57:17.633540 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:18.596937 kubelet[1441]: E1002 19:57:18.596891 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:19.597542 kubelet[1441]: E1002 19:57:19.597488 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:20.597934 kubelet[1441]: E1002 19:57:20.597896 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:21.599211 kubelet[1441]: E1002 19:57:21.599158 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:21.721592 kubelet[1441]: E1002 19:57:21.721554 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:57:21.723421 env[1142]: time="2023-10-02T19:57:21.723331720Z" level=info 
msg="CreateContainer within sandbox \"271835d25ec304052cf865c46df77f3c5951e0257652c5de5ff08da89962074d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:57:21.732454 env[1142]: time="2023-10-02T19:57:21.732408481Z" level=info msg="CreateContainer within sandbox \"271835d25ec304052cf865c46df77f3c5951e0257652c5de5ff08da89962074d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"d12ead3fbb3ac24ce380c7cba49521d409c11a74dc34d74347b38c45c55f7da9\"" Oct 2 19:57:21.733561 env[1142]: time="2023-10-02T19:57:21.732781269Z" level=info msg="StartContainer for \"d12ead3fbb3ac24ce380c7cba49521d409c11a74dc34d74347b38c45c55f7da9\"" Oct 2 19:57:21.750821 systemd[1]: run-containerd-runc-k8s.io-d12ead3fbb3ac24ce380c7cba49521d409c11a74dc34d74347b38c45c55f7da9-runc.wr5a5D.mount: Deactivated successfully. Oct 2 19:57:21.752124 systemd[1]: Started cri-containerd-d12ead3fbb3ac24ce380c7cba49521d409c11a74dc34d74347b38c45c55f7da9.scope. Oct 2 19:57:21.770885 systemd[1]: cri-containerd-d12ead3fbb3ac24ce380c7cba49521d409c11a74dc34d74347b38c45c55f7da9.scope: Deactivated successfully. 
Oct 2 19:57:21.778348 env[1142]: time="2023-10-02T19:57:21.778298607Z" level=info msg="shim disconnected" id=d12ead3fbb3ac24ce380c7cba49521d409c11a74dc34d74347b38c45c55f7da9 Oct 2 19:57:21.778348 env[1142]: time="2023-10-02T19:57:21.778347451Z" level=warning msg="cleaning up after shim disconnected" id=d12ead3fbb3ac24ce380c7cba49521d409c11a74dc34d74347b38c45c55f7da9 namespace=k8s.io Oct 2 19:57:21.778524 env[1142]: time="2023-10-02T19:57:21.778357572Z" level=info msg="cleaning up dead shim" Oct 2 19:57:21.786298 env[1142]: time="2023-10-02T19:57:21.786243404Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:57:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1926 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:57:21Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/d12ead3fbb3ac24ce380c7cba49521d409c11a74dc34d74347b38c45c55f7da9/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:57:21.786556 env[1142]: time="2023-10-02T19:57:21.786496503Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:57:21.786709 env[1142]: time="2023-10-02T19:57:21.786661155Z" level=error msg="Failed to pipe stdout of container \"d12ead3fbb3ac24ce380c7cba49521d409c11a74dc34d74347b38c45c55f7da9\"" error="reading from a closed fifo" Oct 2 19:57:21.786771 env[1142]: time="2023-10-02T19:57:21.786742241Z" level=error msg="Failed to pipe stderr of container \"d12ead3fbb3ac24ce380c7cba49521d409c11a74dc34d74347b38c45c55f7da9\"" error="reading from a closed fifo" Oct 2 19:57:21.789086 env[1142]: time="2023-10-02T19:57:21.789042774Z" level=error msg="StartContainer for \"d12ead3fbb3ac24ce380c7cba49521d409c11a74dc34d74347b38c45c55f7da9\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:57:21.789301 kubelet[1441]: E1002 19:57:21.789280 1441 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="d12ead3fbb3ac24ce380c7cba49521d409c11a74dc34d74347b38c45c55f7da9" Oct 2 19:57:21.789494 kubelet[1441]: E1002 19:57:21.789479 1441 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:57:21.789494 kubelet[1441]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:57:21.789494 kubelet[1441]: rm /hostbin/cilium-mount Oct 2 19:57:21.789494 kubelet[1441]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-hpkkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-xph9q_kube-system(0a5951a4-b50b-4eb1-a072-46c96f4c3f9f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:57:21.789679 kubelet[1441]: E1002 19:57:21.789667 1441 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-xph9q" podUID=0a5951a4-b50b-4eb1-a072-46c96f4c3f9f Oct 2 19:57:21.841167 kubelet[1441]: I1002 19:57:21.841129 1441 scope.go:115] "RemoveContainer" containerID="f080da6fcef7e8574dd8e4e0fd45e24e282a29e422e2b3d4c2615f400a9ee24e" Oct 2 19:57:21.841419 kubelet[1441]: I1002 19:57:21.841405 1441 scope.go:115] "RemoveContainer" containerID="f080da6fcef7e8574dd8e4e0fd45e24e282a29e422e2b3d4c2615f400a9ee24e" Oct 2 19:57:21.842455 env[1142]: time="2023-10-02T19:57:21.842379460Z" level=info msg="RemoveContainer for \"f080da6fcef7e8574dd8e4e0fd45e24e282a29e422e2b3d4c2615f400a9ee24e\"" Oct 2 19:57:21.842885 env[1142]: time="2023-10-02T19:57:21.842849415Z" level=info msg="RemoveContainer for \"f080da6fcef7e8574dd8e4e0fd45e24e282a29e422e2b3d4c2615f400a9ee24e\"" Oct 2 19:57:21.845511 env[1142]: 
time="2023-10-02T19:57:21.845443050Z" level=error msg="RemoveContainer for \"f080da6fcef7e8574dd8e4e0fd45e24e282a29e422e2b3d4c2615f400a9ee24e\" failed" error="rpc error: code = NotFound desc = get container info: container \"f080da6fcef7e8574dd8e4e0fd45e24e282a29e422e2b3d4c2615f400a9ee24e\" in namespace \"k8s.io\": not found" Oct 2 19:57:21.845579 env[1142]: time="2023-10-02T19:57:21.845475852Z" level=info msg="RemoveContainer for \"f080da6fcef7e8574dd8e4e0fd45e24e282a29e422e2b3d4c2615f400a9ee24e\" returns successfully" Oct 2 19:57:21.845763 kubelet[1441]: E1002 19:57:21.845723 1441 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = NotFound desc = get container info: container \"f080da6fcef7e8574dd8e4e0fd45e24e282a29e422e2b3d4c2615f400a9ee24e\" in namespace \"k8s.io\": not found" containerID="f080da6fcef7e8574dd8e4e0fd45e24e282a29e422e2b3d4c2615f400a9ee24e" Oct 2 19:57:21.846128 kubelet[1441]: E1002 19:57:21.846110 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:57:21.846255 kubelet[1441]: I1002 19:57:21.846203 1441 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:f080da6fcef7e8574dd8e4e0fd45e24e282a29e422e2b3d4c2615f400a9ee24e} err="rpc error: code = NotFound desc = get container info: container \"f080da6fcef7e8574dd8e4e0fd45e24e282a29e422e2b3d4c2615f400a9ee24e\" in namespace \"k8s.io\": not found" Oct 2 19:57:21.846372 kubelet[1441]: E1002 19:57:21.846357 1441 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-xph9q_kube-system(0a5951a4-b50b-4eb1-a072-46c96f4c3f9f)\"" pod="kube-system/cilium-xph9q" podUID=0a5951a4-b50b-4eb1-a072-46c96f4c3f9f Oct 2 19:57:22.599759 kubelet[1441]: E1002 19:57:22.599713 
1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:22.634535 kubelet[1441]: E1002 19:57:22.634513 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:22.729868 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d12ead3fbb3ac24ce380c7cba49521d409c11a74dc34d74347b38c45c55f7da9-rootfs.mount: Deactivated successfully. Oct 2 19:57:23.600120 kubelet[1441]: E1002 19:57:23.600071 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:24.600924 kubelet[1441]: E1002 19:57:24.600855 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:24.882453 kubelet[1441]: W1002 19:57:24.882344 1441 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a5951a4_b50b_4eb1_a072_46c96f4c3f9f.slice/cri-containerd-d12ead3fbb3ac24ce380c7cba49521d409c11a74dc34d74347b38c45c55f7da9.scope WatchSource:0}: task d12ead3fbb3ac24ce380c7cba49521d409c11a74dc34d74347b38c45c55f7da9 not found: not found Oct 2 19:57:25.601908 kubelet[1441]: E1002 19:57:25.601841 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:26.602319 kubelet[1441]: E1002 19:57:26.602261 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:27.551259 kubelet[1441]: E1002 19:57:27.551188 1441 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:27.603473 kubelet[1441]: E1002 19:57:27.603408 1441 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:27.636053 kubelet[1441]: E1002 19:57:27.636006 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:28.604319 kubelet[1441]: E1002 19:57:28.604283 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:29.604985 kubelet[1441]: E1002 19:57:29.604919 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:30.605162 kubelet[1441]: E1002 19:57:30.605050 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:31.606234 kubelet[1441]: E1002 19:57:31.606181 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:32.606924 kubelet[1441]: E1002 19:57:32.606873 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:32.636869 kubelet[1441]: E1002 19:57:32.636844 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:33.607858 kubelet[1441]: E1002 19:57:33.607795 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:34.608858 kubelet[1441]: E1002 19:57:34.608800 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:35.609595 kubelet[1441]: E1002 19:57:35.609539 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:57:36.610074 kubelet[1441]: E1002 19:57:36.610030 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:36.721502 kubelet[1441]: E1002 19:57:36.721473 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:57:36.721686 kubelet[1441]: E1002 19:57:36.721670 1441 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-xph9q_kube-system(0a5951a4-b50b-4eb1-a072-46c96f4c3f9f)\"" pod="kube-system/cilium-xph9q" podUID=0a5951a4-b50b-4eb1-a072-46c96f4c3f9f Oct 2 19:57:37.610536 kubelet[1441]: E1002 19:57:37.610497 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:37.638075 kubelet[1441]: E1002 19:57:37.638047 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:38.610880 kubelet[1441]: E1002 19:57:38.610811 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:39.611653 kubelet[1441]: E1002 19:57:39.611616 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:40.612977 kubelet[1441]: E1002 19:57:40.612927 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:41.613899 kubelet[1441]: E1002 19:57:41.613860 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:57:42.614972 kubelet[1441]: E1002 19:57:42.614927 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:42.638891 kubelet[1441]: E1002 19:57:42.638858 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:43.616024 kubelet[1441]: E1002 19:57:43.615960 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:44.616885 kubelet[1441]: E1002 19:57:44.616830 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:45.617807 kubelet[1441]: E1002 19:57:45.617764 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:46.618159 kubelet[1441]: E1002 19:57:46.618077 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:47.551122 kubelet[1441]: E1002 19:57:47.551084 1441 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:47.618455 kubelet[1441]: E1002 19:57:47.618431 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:47.640129 kubelet[1441]: E1002 19:57:47.640093 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:48.619323 kubelet[1441]: E1002 19:57:48.619258 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:49.619614 kubelet[1441]: E1002 19:57:49.619567 1441 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:49.720854 kubelet[1441]: E1002 19:57:49.720820 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:57:49.721073 kubelet[1441]: E1002 19:57:49.721051 1441 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-xph9q_kube-system(0a5951a4-b50b-4eb1-a072-46c96f4c3f9f)\"" pod="kube-system/cilium-xph9q" podUID=0a5951a4-b50b-4eb1-a072-46c96f4c3f9f Oct 2 19:57:50.620576 kubelet[1441]: E1002 19:57:50.620531 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:51.620990 kubelet[1441]: E1002 19:57:51.620951 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:52.621424 kubelet[1441]: E1002 19:57:52.621397 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:52.641249 kubelet[1441]: E1002 19:57:52.641227 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:53.622261 kubelet[1441]: E1002 19:57:53.622214 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:54.624779 kubelet[1441]: E1002 19:57:54.624702 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:55.625488 kubelet[1441]: E1002 19:57:55.625440 1441 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:56.626425 kubelet[1441]: E1002 19:57:56.626389 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:57.627403 kubelet[1441]: E1002 19:57:57.627340 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:57.642036 kubelet[1441]: E1002 19:57:57.642013 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:57:58.627477 kubelet[1441]: E1002 19:57:58.627438 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:59.628435 kubelet[1441]: E1002 19:57:59.628373 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:59.721659 kubelet[1441]: E1002 19:57:59.721624 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:58:00.629127 kubelet[1441]: E1002 19:58:00.629082 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:01.630058 kubelet[1441]: E1002 19:58:01.630016 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:01.721260 kubelet[1441]: E1002 19:58:01.721231 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:58:01.721872 kubelet[1441]: E1002 19:58:01.721853 1441 pod_workers.go:965] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-xph9q_kube-system(0a5951a4-b50b-4eb1-a072-46c96f4c3f9f)\"" pod="kube-system/cilium-xph9q" podUID=0a5951a4-b50b-4eb1-a072-46c96f4c3f9f Oct 2 19:58:02.630417 kubelet[1441]: E1002 19:58:02.630370 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:02.643005 kubelet[1441]: E1002 19:58:02.642986 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:03.631267 kubelet[1441]: E1002 19:58:03.631220 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:04.632197 kubelet[1441]: E1002 19:58:04.632153 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:05.633096 kubelet[1441]: E1002 19:58:05.633038 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:06.634021 kubelet[1441]: E1002 19:58:06.633974 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:07.551396 kubelet[1441]: E1002 19:58:07.551342 1441 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:07.634892 kubelet[1441]: E1002 19:58:07.634850 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:07.644489 kubelet[1441]: E1002 19:58:07.644455 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin 
not initialized" Oct 2 19:58:08.635776 kubelet[1441]: E1002 19:58:08.635739 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:09.637191 kubelet[1441]: E1002 19:58:09.637118 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:10.637496 kubelet[1441]: E1002 19:58:10.637434 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:11.638303 kubelet[1441]: E1002 19:58:11.638235 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:12.638945 kubelet[1441]: E1002 19:58:12.638880 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:12.645533 kubelet[1441]: E1002 19:58:12.645509 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:13.639631 kubelet[1441]: E1002 19:58:13.639571 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:13.721583 kubelet[1441]: E1002 19:58:13.721524 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:58:13.723379 env[1142]: time="2023-10-02T19:58:13.723340953Z" level=info msg="CreateContainer within sandbox \"271835d25ec304052cf865c46df77f3c5951e0257652c5de5ff08da89962074d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 19:58:13.811888 env[1142]: time="2023-10-02T19:58:13.811827472Z" level=info msg="CreateContainer within sandbox 
\"271835d25ec304052cf865c46df77f3c5951e0257652c5de5ff08da89962074d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"23e0b3f0d7d475e28311a3ee0fa73392b01d0d30a5bca9d7bb9f9167491abc03\"" Oct 2 19:58:13.813106 env[1142]: time="2023-10-02T19:58:13.812319837Z" level=info msg="StartContainer for \"23e0b3f0d7d475e28311a3ee0fa73392b01d0d30a5bca9d7bb9f9167491abc03\"" Oct 2 19:58:13.828127 systemd[1]: Started cri-containerd-23e0b3f0d7d475e28311a3ee0fa73392b01d0d30a5bca9d7bb9f9167491abc03.scope. Oct 2 19:58:13.877289 systemd[1]: cri-containerd-23e0b3f0d7d475e28311a3ee0fa73392b01d0d30a5bca9d7bb9f9167491abc03.scope: Deactivated successfully. Oct 2 19:58:13.880827 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23e0b3f0d7d475e28311a3ee0fa73392b01d0d30a5bca9d7bb9f9167491abc03-rootfs.mount: Deactivated successfully. Oct 2 19:58:13.886638 env[1142]: time="2023-10-02T19:58:13.886582228Z" level=info msg="shim disconnected" id=23e0b3f0d7d475e28311a3ee0fa73392b01d0d30a5bca9d7bb9f9167491abc03 Oct 2 19:58:13.886638 env[1142]: time="2023-10-02T19:58:13.886639468Z" level=warning msg="cleaning up after shim disconnected" id=23e0b3f0d7d475e28311a3ee0fa73392b01d0d30a5bca9d7bb9f9167491abc03 namespace=k8s.io Oct 2 19:58:13.886827 env[1142]: time="2023-10-02T19:58:13.886650589Z" level=info msg="cleaning up dead shim" Oct 2 19:58:13.895638 env[1142]: time="2023-10-02T19:58:13.895519789Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:58:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1970 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:58:13Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/23e0b3f0d7d475e28311a3ee0fa73392b01d0d30a5bca9d7bb9f9167491abc03/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:58:13.895837 env[1142]: time="2023-10-02T19:58:13.895773191Z" level=error msg="copy shim log" error="read 
/proc/self/fd/23: file already closed" Oct 2 19:58:13.896012 env[1142]: time="2023-10-02T19:58:13.895956313Z" level=error msg="Failed to pipe stdout of container \"23e0b3f0d7d475e28311a3ee0fa73392b01d0d30a5bca9d7bb9f9167491abc03\"" error="reading from a closed fifo" Oct 2 19:58:13.896244 env[1142]: time="2023-10-02T19:58:13.896152674Z" level=error msg="Failed to pipe stderr of container \"23e0b3f0d7d475e28311a3ee0fa73392b01d0d30a5bca9d7bb9f9167491abc03\"" error="reading from a closed fifo" Oct 2 19:58:13.898275 env[1142]: time="2023-10-02T19:58:13.898222893Z" level=error msg="StartContainer for \"23e0b3f0d7d475e28311a3ee0fa73392b01d0d30a5bca9d7bb9f9167491abc03\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:58:13.898485 kubelet[1441]: E1002 19:58:13.898449 1441 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="23e0b3f0d7d475e28311a3ee0fa73392b01d0d30a5bca9d7bb9f9167491abc03" Oct 2 19:58:13.898569 kubelet[1441]: E1002 19:58:13.898550 1441 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:58:13.898569 kubelet[1441]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:58:13.898569 kubelet[1441]: rm /hostbin/cilium-mount Oct 2 19:58:13.898569 kubelet[1441]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-hpkkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-xph9q_kube-system(0a5951a4-b50b-4eb1-a072-46c96f4c3f9f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:58:13.898687 kubelet[1441]: E1002 19:58:13.898589 1441 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-xph9q" podUID=0a5951a4-b50b-4eb1-a072-46c96f4c3f9f Oct 2 19:58:13.918133 kubelet[1441]: I1002 19:58:13.918104 1441 scope.go:115] "RemoveContainer" containerID="d12ead3fbb3ac24ce380c7cba49521d409c11a74dc34d74347b38c45c55f7da9" Oct 2 19:58:13.918481 kubelet[1441]: I1002 19:58:13.918450 1441 scope.go:115] "RemoveContainer" containerID="d12ead3fbb3ac24ce380c7cba49521d409c11a74dc34d74347b38c45c55f7da9" Oct 2 19:58:13.919556 env[1142]: time="2023-10-02T19:58:13.919511125Z" level=info msg="RemoveContainer for \"d12ead3fbb3ac24ce380c7cba49521d409c11a74dc34d74347b38c45c55f7da9\"" Oct 2 19:58:13.919924 env[1142]: time="2023-10-02T19:58:13.919895569Z" level=info msg="RemoveContainer for \"d12ead3fbb3ac24ce380c7cba49521d409c11a74dc34d74347b38c45c55f7da9\"" Oct 2 19:58:13.920040 env[1142]: time="2023-10-02T19:58:13.920008010Z" level=error msg="RemoveContainer for \"d12ead3fbb3ac24ce380c7cba49521d409c11a74dc34d74347b38c45c55f7da9\" failed" error="failed to set removing state for container \"d12ead3fbb3ac24ce380c7cba49521d409c11a74dc34d74347b38c45c55f7da9\": container is already in removing state" Oct 2 19:58:13.920201 kubelet[1441]: E1002 19:58:13.920185 1441 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"d12ead3fbb3ac24ce380c7cba49521d409c11a74dc34d74347b38c45c55f7da9\": container is already in removing state" containerID="d12ead3fbb3ac24ce380c7cba49521d409c11a74dc34d74347b38c45c55f7da9" Oct 2 19:58:13.920246 kubelet[1441]: E1002 19:58:13.920217 1441 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "d12ead3fbb3ac24ce380c7cba49521d409c11a74dc34d74347b38c45c55f7da9": container is already in removing state; Skipping pod "cilium-xph9q_kube-system(0a5951a4-b50b-4eb1-a072-46c96f4c3f9f)" Oct 2 
19:58:13.920286 kubelet[1441]: E1002 19:58:13.920274 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:58:13.920474 kubelet[1441]: E1002 19:58:13.920463 1441 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-xph9q_kube-system(0a5951a4-b50b-4eb1-a072-46c96f4c3f9f)\"" pod="kube-system/cilium-xph9q" podUID=0a5951a4-b50b-4eb1-a072-46c96f4c3f9f Oct 2 19:58:13.922576 env[1142]: time="2023-10-02T19:58:13.922538113Z" level=info msg="RemoveContainer for \"d12ead3fbb3ac24ce380c7cba49521d409c11a74dc34d74347b38c45c55f7da9\" returns successfully" Oct 2 19:58:14.640603 kubelet[1441]: E1002 19:58:14.640538 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:15.641011 kubelet[1441]: E1002 19:58:15.640963 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:16.642114 kubelet[1441]: E1002 19:58:16.642040 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:16.991890 kubelet[1441]: W1002 19:58:16.991846 1441 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a5951a4_b50b_4eb1_a072_46c96f4c3f9f.slice/cri-containerd-23e0b3f0d7d475e28311a3ee0fa73392b01d0d30a5bca9d7bb9f9167491abc03.scope WatchSource:0}: task 23e0b3f0d7d475e28311a3ee0fa73392b01d0d30a5bca9d7bb9f9167491abc03 not found: not found Oct 2 19:58:17.642895 kubelet[1441]: E1002 19:58:17.642861 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:17.646525 
kubelet[1441]: E1002 19:58:17.646508 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:18.644359 kubelet[1441]: E1002 19:58:18.644313 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:19.645429 kubelet[1441]: E1002 19:58:19.645297 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:20.646230 kubelet[1441]: E1002 19:58:20.646172 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:21.646665 kubelet[1441]: E1002 19:58:21.646614 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:22.646966 kubelet[1441]: E1002 19:58:22.646924 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:22.647396 kubelet[1441]: E1002 19:58:22.647380 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:23.647934 kubelet[1441]: E1002 19:58:23.647890 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:24.648993 kubelet[1441]: E1002 19:58:24.648946 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:24.720706 kubelet[1441]: E1002 19:58:24.720674 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:58:24.720898 
kubelet[1441]: E1002 19:58:24.720879 1441 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-xph9q_kube-system(0a5951a4-b50b-4eb1-a072-46c96f4c3f9f)\"" pod="kube-system/cilium-xph9q" podUID=0a5951a4-b50b-4eb1-a072-46c96f4c3f9f Oct 2 19:58:25.649686 kubelet[1441]: E1002 19:58:25.649644 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:25.969736 update_engine[1132]: I1002 19:58:25.969614 1132 prefs.cc:51] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Oct 2 19:58:25.969736 update_engine[1132]: I1002 19:58:25.969652 1132 prefs.cc:51] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Oct 2 19:58:25.970223 update_engine[1132]: I1002 19:58:25.970196 1132 prefs.cc:51] aleph-version not present in /var/lib/update_engine/prefs Oct 2 19:58:25.970549 update_engine[1132]: I1002 19:58:25.970527 1132 omaha_request_params.cc:62] Current group set to lts Oct 2 19:58:25.970673 update_engine[1132]: I1002 19:58:25.970656 1132 update_attempter.cc:495] Already updated boot flags. Skipping. Oct 2 19:58:25.970673 update_engine[1132]: I1002 19:58:25.970664 1132 update_attempter.cc:638] Scheduling an action processor start. 
Oct 2 19:58:25.970726 update_engine[1132]: I1002 19:58:25.970679 1132 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 2 19:58:25.970726 update_engine[1132]: I1002 19:58:25.970701 1132 prefs.cc:51] previous-version not present in /var/lib/update_engine/prefs Oct 2 19:58:25.971057 update_engine[1132]: I1002 19:58:25.971032 1132 omaha_request_action.cc:268] Posting an Omaha request to https://public.update.flatcar-linux.net/v1/update/ Oct 2 19:58:25.971057 update_engine[1132]: I1002 19:58:25.971045 1132 omaha_request_action.cc:269] Request: Oct 2 19:58:25.971057 update_engine[1132]: Oct 2 19:58:25.971057 update_engine[1132]: Oct 2 19:58:25.971057 update_engine[1132]: Oct 2 19:58:25.971057 update_engine[1132]: Oct 2 19:58:25.971057 update_engine[1132]: Oct 2 19:58:25.971057 update_engine[1132]: Oct 2 19:58:25.971057 update_engine[1132]: Oct 2 19:58:25.971057 update_engine[1132]: Oct 2 19:58:25.971057 update_engine[1132]: I1002 19:58:25.971052 1132 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 2 19:58:25.971449 locksmithd[1171]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Oct 2 19:58:25.972039 update_engine[1132]: I1002 19:58:25.972013 1132 libcurl_http_fetcher.cc:174] Setting up curl options for HTTPS Oct 2 19:58:25.972190 update_engine[1132]: I1002 19:58:25.972177 1132 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Oct 2 19:58:26.649915 kubelet[1441]: E1002 19:58:26.649867 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:27.124742 update_engine[1132]: I1002 19:58:27.124697 1132 prefs.cc:51] update-server-cert-0-2 not present in /var/lib/update_engine/prefs Oct 2 19:58:27.125056 update_engine[1132]: I1002 19:58:27.124949 1132 prefs.cc:51] update-server-cert-0-1 not present in /var/lib/update_engine/prefs Oct 2 19:58:27.125242 update_engine[1132]: I1002 19:58:27.125209 1132 prefs.cc:51] update-server-cert-0-0 not present in /var/lib/update_engine/prefs Oct 2 19:58:27.414185 update_engine[1132]: I1002 19:58:27.414060 1132 libcurl_http_fetcher.cc:263] HTTP response code: 200 Oct 2 19:58:27.415546 update_engine[1132]: I1002 19:58:27.415517 1132 libcurl_http_fetcher.cc:320] Transfer completed (200), 314 bytes downloaded Oct 2 19:58:27.415546 update_engine[1132]: I1002 19:58:27.415537 1132 omaha_request_action.cc:619] Omaha request response: Oct 2 19:58:27.415546 update_engine[1132]: Oct 2 19:58:27.417772 update_engine[1132]: I1002 19:58:27.417742 1132 omaha_request_action.cc:409] No update. Oct 2 19:58:27.417772 update_engine[1132]: I1002 19:58:27.417763 1132 action_processor.cc:82] ActionProcessor::ActionComplete: finished OmahaRequestAction, starting OmahaResponseHandlerAction Oct 2 19:58:27.417772 update_engine[1132]: I1002 19:58:27.417768 1132 omaha_response_handler_action.cc:36] There are no updates. Aborting. Oct 2 19:58:27.417772 update_engine[1132]: I1002 19:58:27.417773 1132 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaResponseHandlerAction action failed. Aborting processing. Oct 2 19:58:27.417772 update_engine[1132]: I1002 19:58:27.417775 1132 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaResponseHandlerAction Oct 2 19:58:27.417772 update_engine[1132]: I1002 19:58:27.417778 1132 update_attempter.cc:302] Processing Done. 
Oct 2 19:58:27.417952 update_engine[1132]: I1002 19:58:27.417791 1132 update_attempter.cc:338] No update. Oct 2 19:58:27.417952 update_engine[1132]: I1002 19:58:27.417800 1132 update_check_scheduler.cc:74] Next update check in 42m49s Oct 2 19:58:27.418281 locksmithd[1171]: LastCheckedTime=1696276707 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Oct 2 19:58:27.550741 kubelet[1441]: E1002 19:58:27.550710 1441 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:27.648633 kubelet[1441]: E1002 19:58:27.648606 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:27.650971 kubelet[1441]: E1002 19:58:27.650957 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:28.651448 kubelet[1441]: E1002 19:58:28.651399 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:29.652572 kubelet[1441]: E1002 19:58:29.652523 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:30.652707 kubelet[1441]: E1002 19:58:30.652662 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:31.653193 kubelet[1441]: E1002 19:58:31.653160 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:32.649981 kubelet[1441]: E1002 19:58:32.649957 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:32.654445 kubelet[1441]: E1002 
19:58:32.654421 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:33.655292 kubelet[1441]: E1002 19:58:33.655233 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:34.655675 kubelet[1441]: E1002 19:58:34.655640 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:35.656716 kubelet[1441]: E1002 19:58:35.656679 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:36.657597 kubelet[1441]: E1002 19:58:36.657560 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:36.720707 kubelet[1441]: E1002 19:58:36.720673 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:58:36.720939 kubelet[1441]: E1002 19:58:36.720891 1441 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-xph9q_kube-system(0a5951a4-b50b-4eb1-a072-46c96f4c3f9f)\"" pod="kube-system/cilium-xph9q" podUID=0a5951a4-b50b-4eb1-a072-46c96f4c3f9f Oct 2 19:58:37.651683 kubelet[1441]: E1002 19:58:37.651643 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:37.659094 kubelet[1441]: E1002 19:58:37.659049 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:38.659540 kubelet[1441]: E1002 19:58:38.659493 1441 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:39.660625 kubelet[1441]: E1002 19:58:39.660573 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:40.661676 kubelet[1441]: E1002 19:58:40.661627 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:41.662338 kubelet[1441]: E1002 19:58:41.662297 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:42.652541 kubelet[1441]: E1002 19:58:42.652492 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:42.662738 kubelet[1441]: E1002 19:58:42.662708 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:43.663068 kubelet[1441]: E1002 19:58:43.663021 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:44.664394 kubelet[1441]: E1002 19:58:44.664331 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:45.665483 kubelet[1441]: E1002 19:58:45.665445 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:46.666593 kubelet[1441]: E1002 19:58:46.666535 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:47.550887 kubelet[1441]: E1002 19:58:47.550851 1441 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:47.653918 kubelet[1441]: 
E1002 19:58:47.653896 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:47.667281 kubelet[1441]: E1002 19:58:47.667256 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:48.668415 kubelet[1441]: E1002 19:58:48.668382 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:49.669131 kubelet[1441]: E1002 19:58:49.669087 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:50.669523 kubelet[1441]: E1002 19:58:50.669470 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:50.721595 kubelet[1441]: E1002 19:58:50.721550 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:58:50.721770 kubelet[1441]: E1002 19:58:50.721755 1441 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-xph9q_kube-system(0a5951a4-b50b-4eb1-a072-46c96f4c3f9f)\"" pod="kube-system/cilium-xph9q" podUID=0a5951a4-b50b-4eb1-a072-46c96f4c3f9f Oct 2 19:58:51.670315 kubelet[1441]: E1002 19:58:51.670266 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:52.654751 kubelet[1441]: E1002 19:58:52.654726 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 
19:58:52.670444 kubelet[1441]: E1002 19:58:52.670413 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:53.671933 kubelet[1441]: E1002 19:58:53.671889 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:54.672691 kubelet[1441]: E1002 19:58:54.672653 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:55.673698 kubelet[1441]: E1002 19:58:55.673655 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:56.674076 kubelet[1441]: E1002 19:58:56.674038 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:57.655834 kubelet[1441]: E1002 19:58:57.655808 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:57.675039 kubelet[1441]: E1002 19:58:57.675007 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:58.675888 kubelet[1441]: E1002 19:58:58.675852 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:59.676817 kubelet[1441]: E1002 19:58:59.676780 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:00.677414 kubelet[1441]: E1002 19:59:00.677309 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:01.678735 kubelet[1441]: E1002 19:59:01.678692 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:02.656632 kubelet[1441]: E1002 19:59:02.656596 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:02.679840 kubelet[1441]: E1002 19:59:02.679807 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:03.680662 kubelet[1441]: E1002 19:59:03.680612 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:04.681635 kubelet[1441]: E1002 19:59:04.681599 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:04.721475 kubelet[1441]: E1002 19:59:04.721451 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:04.721816 kubelet[1441]: E1002 19:59:04.721801 1441 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-xph9q_kube-system(0a5951a4-b50b-4eb1-a072-46c96f4c3f9f)\"" pod="kube-system/cilium-xph9q" podUID=0a5951a4-b50b-4eb1-a072-46c96f4c3f9f Oct 2 19:59:05.682661 kubelet[1441]: E1002 19:59:05.682617 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:06.683529 kubelet[1441]: E1002 19:59:06.683479 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:07.551572 kubelet[1441]: E1002 19:59:07.551527 1441 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 
2 19:59:07.657715 kubelet[1441]: E1002 19:59:07.657672 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:07.684083 kubelet[1441]: E1002 19:59:07.684047 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:08.684899 kubelet[1441]: E1002 19:59:08.684855 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:09.685971 kubelet[1441]: E1002 19:59:09.685918 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:10.686624 kubelet[1441]: E1002 19:59:10.686576 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:11.687080 kubelet[1441]: E1002 19:59:11.687021 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:12.658676 kubelet[1441]: E1002 19:59:12.658643 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:12.687948 kubelet[1441]: E1002 19:59:12.687909 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:13.688331 kubelet[1441]: E1002 19:59:13.688296 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:13.721178 kubelet[1441]: E1002 19:59:13.721150 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 
19:59:14.689426 kubelet[1441]: E1002 19:59:14.689379 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:15.689694 kubelet[1441]: E1002 19:59:15.689635 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:15.721516 kubelet[1441]: E1002 19:59:15.721485 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:15.721737 kubelet[1441]: E1002 19:59:15.721720 1441 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-xph9q_kube-system(0a5951a4-b50b-4eb1-a072-46c96f4c3f9f)\"" pod="kube-system/cilium-xph9q" podUID=0a5951a4-b50b-4eb1-a072-46c96f4c3f9f Oct 2 19:59:16.690544 kubelet[1441]: E1002 19:59:16.690481 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:17.659460 kubelet[1441]: E1002 19:59:17.659433 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:17.690928 kubelet[1441]: E1002 19:59:17.690896 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:18.692279 kubelet[1441]: E1002 19:59:18.692217 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:19.693227 kubelet[1441]: E1002 19:59:19.693192 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:20.694805 kubelet[1441]: E1002 
19:59:20.694752 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:21.694982 kubelet[1441]: E1002 19:59:21.694927 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:22.660563 kubelet[1441]: E1002 19:59:22.660519 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:22.695777 kubelet[1441]: E1002 19:59:22.695718 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:23.696206 kubelet[1441]: E1002 19:59:23.696163 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:24.697099 kubelet[1441]: E1002 19:59:24.697031 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:25.697780 kubelet[1441]: E1002 19:59:25.697733 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:26.698088 kubelet[1441]: E1002 19:59:26.698023 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:26.726855 env[1142]: time="2023-10-02T19:59:26.726810376Z" level=info msg="StopPodSandbox for \"271835d25ec304052cf865c46df77f3c5951e0257652c5de5ff08da89962074d\"" Oct 2 19:59:26.727336 env[1142]: time="2023-10-02T19:59:26.727309316Z" level=info msg="Container to stop \"23e0b3f0d7d475e28311a3ee0fa73392b01d0d30a5bca9d7bb9f9167491abc03\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:59:26.728499 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-271835d25ec304052cf865c46df77f3c5951e0257652c5de5ff08da89962074d-shm.mount: Deactivated successfully. Oct 2 19:59:26.735841 systemd[1]: cri-containerd-271835d25ec304052cf865c46df77f3c5951e0257652c5de5ff08da89962074d.scope: Deactivated successfully. Oct 2 19:59:26.734000 audit: BPF prog-id=67 op=UNLOAD Oct 2 19:59:26.736614 kernel: kauditd_printk_skb: 165 callbacks suppressed Oct 2 19:59:26.736674 kernel: audit: type=1334 audit(1696276766.734:664): prog-id=67 op=UNLOAD Oct 2 19:59:26.739000 audit: BPF prog-id=70 op=UNLOAD Oct 2 19:59:26.741166 kernel: audit: type=1334 audit(1696276766.739:665): prog-id=70 op=UNLOAD Oct 2 19:59:26.756659 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-271835d25ec304052cf865c46df77f3c5951e0257652c5de5ff08da89962074d-rootfs.mount: Deactivated successfully. Oct 2 19:59:26.760922 env[1142]: time="2023-10-02T19:59:26.760877579Z" level=info msg="shim disconnected" id=271835d25ec304052cf865c46df77f3c5951e0257652c5de5ff08da89962074d Oct 2 19:59:26.760922 env[1142]: time="2023-10-02T19:59:26.760925020Z" level=warning msg="cleaning up after shim disconnected" id=271835d25ec304052cf865c46df77f3c5951e0257652c5de5ff08da89962074d namespace=k8s.io Oct 2 19:59:26.761157 env[1142]: time="2023-10-02T19:59:26.760942741Z" level=info msg="cleaning up dead shim" Oct 2 19:59:26.771302 env[1142]: time="2023-10-02T19:59:26.771249221Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:59:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2007 runtime=io.containerd.runc.v2\n" Oct 2 19:59:26.771599 env[1142]: time="2023-10-02T19:59:26.771563433Z" level=info msg="TearDown network for sandbox \"271835d25ec304052cf865c46df77f3c5951e0257652c5de5ff08da89962074d\" successfully" Oct 2 19:59:26.771599 env[1142]: time="2023-10-02T19:59:26.771591474Z" level=info msg="StopPodSandbox for \"271835d25ec304052cf865c46df77f3c5951e0257652c5de5ff08da89962074d\" returns successfully" Oct 2 
19:59:26.867809 kubelet[1441]: I1002 19:59:26.867746 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-host-proc-sys-kernel\") pod \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\" (UID: \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\") " Oct 2 19:59:26.867809 kubelet[1441]: I1002 19:59:26.867817 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-cilium-run\") pod \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\" (UID: \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\") " Oct 2 19:59:26.867809 kubelet[1441]: I1002 19:59:26.867841 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-cilium-cgroup\") pod \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\" (UID: \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\") " Oct 2 19:59:26.868628 kubelet[1441]: I1002 19:59:26.867866 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-cilium-config-path\") pod \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\" (UID: \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\") " Oct 2 19:59:26.868628 kubelet[1441]: I1002 19:59:26.867881 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0a5951a4-b50b-4eb1-a072-46c96f4c3f9f" (UID: "0a5951a4-b50b-4eb1-a072-46c96f4c3f9f"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:26.868628 kubelet[1441]: I1002 19:59:26.867910 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0a5951a4-b50b-4eb1-a072-46c96f4c3f9f" (UID: "0a5951a4-b50b-4eb1-a072-46c96f4c3f9f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:26.868628 kubelet[1441]: I1002 19:59:26.867888 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-etc-cni-netd\") pod \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\" (UID: \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\") " Oct 2 19:59:26.868628 kubelet[1441]: I1002 19:59:26.867951 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-lib-modules\") pod \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\" (UID: \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\") " Oct 2 19:59:26.868628 kubelet[1441]: I1002 19:59:26.867973 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-xtables-lock\") pod \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\" (UID: \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\") " Oct 2 19:59:26.868809 kubelet[1441]: I1002 19:59:26.867997 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-clustermesh-secrets\") pod \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\" (UID: \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\") " Oct 2 19:59:26.868809 kubelet[1441]: I1002 19:59:26.868017 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-hubble-tls\") pod \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\" (UID: \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\") " Oct 2 19:59:26.868809 kubelet[1441]: I1002 19:59:26.868033 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-host-proc-sys-net\") pod \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\" (UID: \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\") " Oct 2 19:59:26.868809 kubelet[1441]: I1002 19:59:26.868050 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-hostproc\") pod \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\" (UID: \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\") " Oct 2 19:59:26.868809 kubelet[1441]: I1002 19:59:26.868077 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-bpf-maps\") pod \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\" (UID: \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\") " Oct 2 19:59:26.868809 kubelet[1441]: I1002 19:59:26.868097 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpkkl\" (UniqueName: \"kubernetes.io/projected/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-kube-api-access-hpkkl\") pod \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\" (UID: \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\") " Oct 2 19:59:26.868951 kubelet[1441]: W1002 19:59:26.868084 1441 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:59:26.868951 kubelet[1441]: I1002 19:59:26.868114 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-cni-path\") pod \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\" (UID: \"0a5951a4-b50b-4eb1-a072-46c96f4c3f9f\") " Oct 2 19:59:26.868951 kubelet[1441]: I1002 19:59:26.868160 1441 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-etc-cni-netd\") on node \"10.0.0.10\" DevicePath \"\"" Oct 2 19:59:26.868951 kubelet[1441]: I1002 19:59:26.868172 1441 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-cilium-cgroup\") on node \"10.0.0.10\" DevicePath \"\"" Oct 2 19:59:26.868951 kubelet[1441]: I1002 19:59:26.867857 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0a5951a4-b50b-4eb1-a072-46c96f4c3f9f" (UID: "0a5951a4-b50b-4eb1-a072-46c96f4c3f9f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:26.868951 kubelet[1441]: I1002 19:59:26.868200 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-hostproc" (OuterVolumeSpecName: "hostproc") pod "0a5951a4-b50b-4eb1-a072-46c96f4c3f9f" (UID: "0a5951a4-b50b-4eb1-a072-46c96f4c3f9f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:26.869090 kubelet[1441]: I1002 19:59:26.868214 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0a5951a4-b50b-4eb1-a072-46c96f4c3f9f" (UID: "0a5951a4-b50b-4eb1-a072-46c96f4c3f9f"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:26.869090 kubelet[1441]: I1002 19:59:26.868226 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0a5951a4-b50b-4eb1-a072-46c96f4c3f9f" (UID: "0a5951a4-b50b-4eb1-a072-46c96f4c3f9f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:26.869090 kubelet[1441]: I1002 19:59:26.868413 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0a5951a4-b50b-4eb1-a072-46c96f4c3f9f" (UID: "0a5951a4-b50b-4eb1-a072-46c96f4c3f9f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:26.869090 kubelet[1441]: I1002 19:59:26.868426 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0a5951a4-b50b-4eb1-a072-46c96f4c3f9f" (UID: "0a5951a4-b50b-4eb1-a072-46c96f4c3f9f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:26.869090 kubelet[1441]: I1002 19:59:26.868437 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0a5951a4-b50b-4eb1-a072-46c96f4c3f9f" (UID: "0a5951a4-b50b-4eb1-a072-46c96f4c3f9f"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:26.869223 kubelet[1441]: I1002 19:59:26.868639 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-cni-path" (OuterVolumeSpecName: "cni-path") pod "0a5951a4-b50b-4eb1-a072-46c96f4c3f9f" (UID: "0a5951a4-b50b-4eb1-a072-46c96f4c3f9f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:26.870178 kubelet[1441]: I1002 19:59:26.870076 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0a5951a4-b50b-4eb1-a072-46c96f4c3f9f" (UID: "0a5951a4-b50b-4eb1-a072-46c96f4c3f9f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:59:26.871612 systemd[1]: var-lib-kubelet-pods-0a5951a4\x2db50b\x2d4eb1\x2da072\x2d46c96f4c3f9f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:59:26.872224 kubelet[1441]: I1002 19:59:26.871806 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0a5951a4-b50b-4eb1-a072-46c96f4c3f9f" (UID: "0a5951a4-b50b-4eb1-a072-46c96f4c3f9f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:59:26.871716 systemd[1]: var-lib-kubelet-pods-0a5951a4\x2db50b\x2d4eb1\x2da072\x2d46c96f4c3f9f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Oct 2 19:59:26.872615 kubelet[1441]: I1002 19:59:26.872536 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0a5951a4-b50b-4eb1-a072-46c96f4c3f9f" (UID: "0a5951a4-b50b-4eb1-a072-46c96f4c3f9f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:59:26.873043 systemd[1]: var-lib-kubelet-pods-0a5951a4\x2db50b\x2d4eb1\x2da072\x2d46c96f4c3f9f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhpkkl.mount: Deactivated successfully. Oct 2 19:59:26.873410 kubelet[1441]: I1002 19:59:26.873385 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-kube-api-access-hpkkl" (OuterVolumeSpecName: "kube-api-access-hpkkl") pod "0a5951a4-b50b-4eb1-a072-46c96f4c3f9f" (UID: "0a5951a4-b50b-4eb1-a072-46c96f4c3f9f"). InnerVolumeSpecName "kube-api-access-hpkkl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:59:26.969563 kubelet[1441]: I1002 19:59:26.969457 1441 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-hubble-tls\") on node \"10.0.0.10\" DevicePath \"\"" Oct 2 19:59:26.969720 kubelet[1441]: I1002 19:59:26.969708 1441 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-host-proc-sys-net\") on node \"10.0.0.10\" DevicePath \"\"" Oct 2 19:59:26.969777 kubelet[1441]: I1002 19:59:26.969768 1441 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-hostproc\") on node \"10.0.0.10\" DevicePath \"\"" Oct 2 19:59:26.969840 kubelet[1441]: I1002 19:59:26.969832 1441 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-xtables-lock\") on node \"10.0.0.10\" DevicePath \"\"" Oct 2 19:59:26.969896 kubelet[1441]: I1002 19:59:26.969888 1441 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-clustermesh-secrets\") on node \"10.0.0.10\" DevicePath \"\"" Oct 2 19:59:26.969970 kubelet[1441]: I1002 19:59:26.969961 1441 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-bpf-maps\") on node \"10.0.0.10\" DevicePath \"\"" Oct 2 19:59:26.970091 kubelet[1441]: I1002 19:59:26.970081 1441 reconciler.go:399] "Volume detached for volume \"kube-api-access-hpkkl\" (UniqueName: \"kubernetes.io/projected/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-kube-api-access-hpkkl\") on node \"10.0.0.10\" DevicePath \"\"" Oct 2 19:59:26.970171 kubelet[1441]: I1002 19:59:26.970163 1441 reconciler.go:399] "Volume detached for volume 
\"cni-path\" (UniqueName: \"kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-cni-path\") on node \"10.0.0.10\" DevicePath \"\"" Oct 2 19:59:26.970234 kubelet[1441]: I1002 19:59:26.970226 1441 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-lib-modules\") on node \"10.0.0.10\" DevicePath \"\"" Oct 2 19:59:26.970289 kubelet[1441]: I1002 19:59:26.970282 1441 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-host-proc-sys-kernel\") on node \"10.0.0.10\" DevicePath \"\"" Oct 2 19:59:26.970349 kubelet[1441]: I1002 19:59:26.970341 1441 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-cilium-run\") on node \"10.0.0.10\" DevicePath \"\"" Oct 2 19:59:26.970409 kubelet[1441]: I1002 19:59:26.970400 1441 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f-cilium-config-path\") on node \"10.0.0.10\" DevicePath \"\"" Oct 2 19:59:27.021360 kubelet[1441]: I1002 19:59:27.021326 1441 scope.go:115] "RemoveContainer" containerID="23e0b3f0d7d475e28311a3ee0fa73392b01d0d30a5bca9d7bb9f9167491abc03" Oct 2 19:59:27.024764 systemd[1]: Removed slice kubepods-burstable-pod0a5951a4_b50b_4eb1_a072_46c96f4c3f9f.slice. 
Oct 2 19:59:27.026265 env[1142]: time="2023-10-02T19:59:27.026223239Z" level=info msg="RemoveContainer for \"23e0b3f0d7d475e28311a3ee0fa73392b01d0d30a5bca9d7bb9f9167491abc03\"" Oct 2 19:59:27.029627 env[1142]: time="2023-10-02T19:59:27.029599250Z" level=info msg="RemoveContainer for \"23e0b3f0d7d475e28311a3ee0fa73392b01d0d30a5bca9d7bb9f9167491abc03\" returns successfully" Oct 2 19:59:27.055254 kubelet[1441]: I1002 19:59:27.055091 1441 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:59:27.055254 kubelet[1441]: E1002 19:59:27.055156 1441 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="0a5951a4-b50b-4eb1-a072-46c96f4c3f9f" containerName="mount-cgroup" Oct 2 19:59:27.055254 kubelet[1441]: E1002 19:59:27.055166 1441 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="0a5951a4-b50b-4eb1-a072-46c96f4c3f9f" containerName="mount-cgroup" Oct 2 19:59:27.055254 kubelet[1441]: E1002 19:59:27.055173 1441 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="0a5951a4-b50b-4eb1-a072-46c96f4c3f9f" containerName="mount-cgroup" Oct 2 19:59:27.055254 kubelet[1441]: E1002 19:59:27.055179 1441 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="0a5951a4-b50b-4eb1-a072-46c96f4c3f9f" containerName="mount-cgroup" Oct 2 19:59:27.055254 kubelet[1441]: I1002 19:59:27.055194 1441 memory_manager.go:345] "RemoveStaleState removing state" podUID="0a5951a4-b50b-4eb1-a072-46c96f4c3f9f" containerName="mount-cgroup" Oct 2 19:59:27.055254 kubelet[1441]: I1002 19:59:27.055200 1441 memory_manager.go:345] "RemoveStaleState removing state" podUID="0a5951a4-b50b-4eb1-a072-46c96f4c3f9f" containerName="mount-cgroup" Oct 2 19:59:27.055254 kubelet[1441]: I1002 19:59:27.055206 1441 memory_manager.go:345] "RemoveStaleState removing state" podUID="0a5951a4-b50b-4eb1-a072-46c96f4c3f9f" containerName="mount-cgroup" Oct 2 19:59:27.055254 kubelet[1441]: E1002 19:59:27.055219 1441 cpu_manager.go:394] "RemoveStaleState: removing container" 
podUID="0a5951a4-b50b-4eb1-a072-46c96f4c3f9f" containerName="mount-cgroup" Oct 2 19:59:27.055254 kubelet[1441]: I1002 19:59:27.055231 1441 memory_manager.go:345] "RemoveStaleState removing state" podUID="0a5951a4-b50b-4eb1-a072-46c96f4c3f9f" containerName="mount-cgroup" Oct 2 19:59:27.055254 kubelet[1441]: I1002 19:59:27.055247 1441 memory_manager.go:345] "RemoveStaleState removing state" podUID="0a5951a4-b50b-4eb1-a072-46c96f4c3f9f" containerName="mount-cgroup" Oct 2 19:59:27.060467 systemd[1]: Created slice kubepods-burstable-pod375f6e1d_fc17_4512_94bc_79a304931c7b.slice. Oct 2 19:59:27.172784 kubelet[1441]: I1002 19:59:27.172717 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-etc-cni-netd\") pod \"cilium-rsvlq\" (UID: \"375f6e1d-fc17-4512-94bc-79a304931c7b\") " pod="kube-system/cilium-rsvlq" Oct 2 19:59:27.173069 kubelet[1441]: I1002 19:59:27.173049 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/375f6e1d-fc17-4512-94bc-79a304931c7b-clustermesh-secrets\") pod \"cilium-rsvlq\" (UID: \"375f6e1d-fc17-4512-94bc-79a304931c7b\") " pod="kube-system/cilium-rsvlq" Oct 2 19:59:27.173326 kubelet[1441]: I1002 19:59:27.173309 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-bpf-maps\") pod \"cilium-rsvlq\" (UID: \"375f6e1d-fc17-4512-94bc-79a304931c7b\") " pod="kube-system/cilium-rsvlq" Oct 2 19:59:27.174116 kubelet[1441]: I1002 19:59:27.174087 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-cni-path\") pod \"cilium-rsvlq\" (UID: \"375f6e1d-fc17-4512-94bc-79a304931c7b\") " 
pod="kube-system/cilium-rsvlq" Oct 2 19:59:27.174282 kubelet[1441]: I1002 19:59:27.174269 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-cilium-run\") pod \"cilium-rsvlq\" (UID: \"375f6e1d-fc17-4512-94bc-79a304931c7b\") " pod="kube-system/cilium-rsvlq" Oct 2 19:59:27.174372 kubelet[1441]: I1002 19:59:27.174362 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-cilium-cgroup\") pod \"cilium-rsvlq\" (UID: \"375f6e1d-fc17-4512-94bc-79a304931c7b\") " pod="kube-system/cilium-rsvlq" Oct 2 19:59:27.174453 kubelet[1441]: I1002 19:59:27.174444 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-xtables-lock\") pod \"cilium-rsvlq\" (UID: \"375f6e1d-fc17-4512-94bc-79a304931c7b\") " pod="kube-system/cilium-rsvlq" Oct 2 19:59:27.174551 kubelet[1441]: I1002 19:59:27.174542 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-host-proc-sys-kernel\") pod \"cilium-rsvlq\" (UID: \"375f6e1d-fc17-4512-94bc-79a304931c7b\") " pod="kube-system/cilium-rsvlq" Oct 2 19:59:27.174665 kubelet[1441]: I1002 19:59:27.174636 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkjn8\" (UniqueName: \"kubernetes.io/projected/375f6e1d-fc17-4512-94bc-79a304931c7b-kube-api-access-qkjn8\") pod \"cilium-rsvlq\" (UID: \"375f6e1d-fc17-4512-94bc-79a304931c7b\") " pod="kube-system/cilium-rsvlq" Oct 2 19:59:27.174712 kubelet[1441]: I1002 19:59:27.174698 1441 reconciler.go:357] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-lib-modules\") pod \"cilium-rsvlq\" (UID: \"375f6e1d-fc17-4512-94bc-79a304931c7b\") " pod="kube-system/cilium-rsvlq" Oct 2 19:59:27.174857 kubelet[1441]: I1002 19:59:27.174727 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/375f6e1d-fc17-4512-94bc-79a304931c7b-cilium-config-path\") pod \"cilium-rsvlq\" (UID: \"375f6e1d-fc17-4512-94bc-79a304931c7b\") " pod="kube-system/cilium-rsvlq" Oct 2 19:59:27.174857 kubelet[1441]: I1002 19:59:27.174770 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/375f6e1d-fc17-4512-94bc-79a304931c7b-hubble-tls\") pod \"cilium-rsvlq\" (UID: \"375f6e1d-fc17-4512-94bc-79a304931c7b\") " pod="kube-system/cilium-rsvlq" Oct 2 19:59:27.174857 kubelet[1441]: I1002 19:59:27.174795 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-hostproc\") pod \"cilium-rsvlq\" (UID: \"375f6e1d-fc17-4512-94bc-79a304931c7b\") " pod="kube-system/cilium-rsvlq" Oct 2 19:59:27.174857 kubelet[1441]: I1002 19:59:27.174832 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-host-proc-sys-net\") pod \"cilium-rsvlq\" (UID: \"375f6e1d-fc17-4512-94bc-79a304931c7b\") " pod="kube-system/cilium-rsvlq" Oct 2 19:59:27.373850 kubelet[1441]: E1002 19:59:27.373822 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:27.374818 
env[1142]: time="2023-10-02T19:59:27.374778482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rsvlq,Uid:375f6e1d-fc17-4512-94bc-79a304931c7b,Namespace:kube-system,Attempt:0,}" Oct 2 19:59:27.386406 env[1142]: time="2023-10-02T19:59:27.386337012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:59:27.386406 env[1142]: time="2023-10-02T19:59:27.386376653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:59:27.386572 env[1142]: time="2023-10-02T19:59:27.386387734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:59:27.386781 env[1142]: time="2023-10-02T19:59:27.386749708Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/99adca3d6ad95114130f2906fbf0bad7dccc883d28324e8fa81b830f07adbc5d pid=2034 runtime=io.containerd.runc.v2 Oct 2 19:59:27.399573 systemd[1]: Started cri-containerd-99adca3d6ad95114130f2906fbf0bad7dccc883d28324e8fa81b830f07adbc5d.scope. 
Oct 2 19:59:27.423000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.423000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.428365 kernel: audit: type=1400 audit(1696276767.423:666): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.428455 kernel: audit: type=1400 audit(1696276767.423:667): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.428477 kernel: audit: type=1400 audit(1696276767.423:668): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.423000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.430151 kernel: audit: type=1400 audit(1696276767.423:669): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.423000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.423000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:59:27.433194 kernel: audit: type=1400 audit(1696276767.423:670): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.433259 kernel: audit: type=1400 audit(1696276767.423:671): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.423000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.423000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.436481 kernel: audit: type=1400 audit(1696276767.423:672): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.436526 kernel: audit: type=1400 audit(1696276767.423:673): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.423000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.423000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.425000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.425000 audit: BPF prog-id=78 op=LOAD Oct 2 19:59:27.425000 audit[2045]: AVC avc: denied { bpf } for pid=2045 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.425000 audit[2045]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000195b38 a2=10 a3=0 items=0 ppid=2034 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:27.425000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939616463613364366164393531313431333066323930366662663062 Oct 2 19:59:27.425000 audit[2045]: AVC avc: denied { perfmon } for pid=2045 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.425000 audit[2045]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=2034 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:27.425000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939616463613364366164393531313431333066323930366662663062 Oct 2 19:59:27.425000 audit[2045]: AVC avc: denied { bpf } for pid=2045 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.425000 audit[2045]: AVC avc: denied { bpf } for pid=2045 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.425000 audit[2045]: AVC avc: denied { bpf } for pid=2045 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.425000 audit[2045]: AVC avc: denied { perfmon } for pid=2045 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.425000 audit[2045]: AVC avc: denied { perfmon } for pid=2045 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.425000 audit[2045]: AVC avc: denied { perfmon } for pid=2045 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.425000 audit[2045]: AVC avc: denied { perfmon } for pid=2045 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.425000 audit[2045]: AVC avc: denied { perfmon } for pid=2045 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.425000 audit[2045]: AVC avc: denied { bpf } for pid=2045 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.425000 audit[2045]: AVC avc: denied { bpf } for pid=2045 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.425000 audit: BPF 
prog-id=79 op=LOAD Oct 2 19:59:27.425000 audit[2045]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=2034 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:27.425000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939616463613364366164393531313431333066323930366662663062 Oct 2 19:59:27.427000 audit[2045]: AVC avc: denied { bpf } for pid=2045 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.427000 audit[2045]: AVC avc: denied { bpf } for pid=2045 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.427000 audit[2045]: AVC avc: denied { perfmon } for pid=2045 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.427000 audit[2045]: AVC avc: denied { perfmon } for pid=2045 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.427000 audit[2045]: AVC avc: denied { perfmon } for pid=2045 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.427000 audit[2045]: AVC avc: denied { perfmon } for pid=2045 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.427000 audit[2045]: AVC avc: denied { perfmon } for 
pid=2045 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.427000 audit[2045]: AVC avc: denied { bpf } for pid=2045 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.427000 audit[2045]: AVC avc: denied { bpf } for pid=2045 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.427000 audit: BPF prog-id=80 op=LOAD Oct 2 19:59:27.427000 audit[2045]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=2034 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:27.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939616463613364366164393531313431333066323930366662663062 Oct 2 19:59:27.428000 audit: BPF prog-id=80 op=UNLOAD Oct 2 19:59:27.428000 audit: BPF prog-id=79 op=UNLOAD Oct 2 19:59:27.428000 audit[2045]: AVC avc: denied { bpf } for pid=2045 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.428000 audit[2045]: AVC avc: denied { bpf } for pid=2045 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.428000 audit[2045]: AVC avc: denied { bpf } for pid=2045 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:59:27.428000 audit[2045]: AVC avc: denied { perfmon } for pid=2045 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.428000 audit[2045]: AVC avc: denied { perfmon } for pid=2045 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.428000 audit[2045]: AVC avc: denied { perfmon } for pid=2045 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.428000 audit[2045]: AVC avc: denied { perfmon } for pid=2045 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.428000 audit[2045]: AVC avc: denied { perfmon } for pid=2045 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.428000 audit[2045]: AVC avc: denied { bpf } for pid=2045 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.428000 audit[2045]: AVC avc: denied { bpf } for pid=2045 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:27.428000 audit: BPF prog-id=81 op=LOAD Oct 2 19:59:27.428000 audit[2045]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=2034 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:27.428000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939616463613364366164393531313431333066323930366662663062 Oct 2 19:59:27.448813 env[1142]: time="2023-10-02T19:59:27.448768801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rsvlq,Uid:375f6e1d-fc17-4512-94bc-79a304931c7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"99adca3d6ad95114130f2906fbf0bad7dccc883d28324e8fa81b830f07adbc5d\"" Oct 2 19:59:27.449771 kubelet[1441]: E1002 19:59:27.449609 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:27.451489 env[1142]: time="2023-10-02T19:59:27.451451465Z" level=info msg="CreateContainer within sandbox \"99adca3d6ad95114130f2906fbf0bad7dccc883d28324e8fa81b830f07adbc5d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:59:27.460768 env[1142]: time="2023-10-02T19:59:27.460721666Z" level=info msg="CreateContainer within sandbox \"99adca3d6ad95114130f2906fbf0bad7dccc883d28324e8fa81b830f07adbc5d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8bd22014e570839f71c6e69773daf9e0cbfc7be2f4e6694d0370bf822bcc9852\"" Oct 2 19:59:27.461362 env[1142]: time="2023-10-02T19:59:27.461335490Z" level=info msg="StartContainer for \"8bd22014e570839f71c6e69773daf9e0cbfc7be2f4e6694d0370bf822bcc9852\"" Oct 2 19:59:27.476487 systemd[1]: Started cri-containerd-8bd22014e570839f71c6e69773daf9e0cbfc7be2f4e6694d0370bf822bcc9852.scope. Oct 2 19:59:27.510195 systemd[1]: cri-containerd-8bd22014e570839f71c6e69773daf9e0cbfc7be2f4e6694d0370bf822bcc9852.scope: Deactivated successfully. 
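The PROCTITLE fields in the audit records above are hex-encoded command lines with NUL-separated arguments (they decode to the truncated runc invocation for the sandbox task). A minimal decoding sketch, not part of the log itself:

```python
def decode_proctitle(hex_str: str) -> str:
    """Audit PROCTITLE values are hex-encoded argv strings whose
    arguments are separated by NUL bytes; replacing the NULs with
    spaces recovers a readable command line."""
    return bytes.fromhex(hex_str).replace(b"\x00", b" ").decode("utf-8", "replace")

# First few bytes of the proctitle records logged above:
print(decode_proctitle("72756E63002D2D726F6F74"))  # runc --root
```

Applied to the full records above, this yields the `runc --root /run/containerd/runc/k8s.io --log …` invocation, truncated by the kernel's proctitle length limit.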
Oct 2 19:59:27.520122 env[1142]: time="2023-10-02T19:59:27.520063695Z" level=info msg="shim disconnected" id=8bd22014e570839f71c6e69773daf9e0cbfc7be2f4e6694d0370bf822bcc9852 Oct 2 19:59:27.520122 env[1142]: time="2023-10-02T19:59:27.520120697Z" level=warning msg="cleaning up after shim disconnected" id=8bd22014e570839f71c6e69773daf9e0cbfc7be2f4e6694d0370bf822bcc9852 namespace=k8s.io Oct 2 19:59:27.520320 env[1142]: time="2023-10-02T19:59:27.520130418Z" level=info msg="cleaning up dead shim" Oct 2 19:59:27.527666 env[1142]: time="2023-10-02T19:59:27.527605509Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:59:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2093 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:59:27Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/8bd22014e570839f71c6e69773daf9e0cbfc7be2f4e6694d0370bf822bcc9852/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:59:27.527906 env[1142]: time="2023-10-02T19:59:27.527861799Z" level=error msg="copy shim log" error="read /proc/self/fd/30: file already closed" Oct 2 19:59:27.528070 env[1142]: time="2023-10-02T19:59:27.528023485Z" level=error msg="Failed to pipe stdout of container \"8bd22014e570839f71c6e69773daf9e0cbfc7be2f4e6694d0370bf822bcc9852\"" error="reading from a closed fifo" Oct 2 19:59:27.528117 env[1142]: time="2023-10-02T19:59:27.528058926Z" level=error msg="Failed to pipe stderr of container \"8bd22014e570839f71c6e69773daf9e0cbfc7be2f4e6694d0370bf822bcc9852\"" error="reading from a closed fifo" Oct 2 19:59:27.529312 env[1142]: time="2023-10-02T19:59:27.529272934Z" level=error msg="StartContainer for \"8bd22014e570839f71c6e69773daf9e0cbfc7be2f4e6694d0370bf822bcc9852\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:59:27.529876 kubelet[1441]: E1002 19:59:27.529544 1441 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="8bd22014e570839f71c6e69773daf9e0cbfc7be2f4e6694d0370bf822bcc9852" Oct 2 19:59:27.529876 kubelet[1441]: E1002 19:59:27.529631 1441 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:59:27.529876 kubelet[1441]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:59:27.529876 kubelet[1441]: rm /hostbin/cilium-mount Oct 2 19:59:27.530074 kubelet[1441]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-qkjn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-rsvlq_kube-system(375f6e1d-fc17-4512-94bc-79a304931c7b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:59:27.530157 kubelet[1441]: E1002 19:59:27.529664 1441 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-rsvlq" podUID=375f6e1d-fc17-4512-94bc-79a304931c7b Oct 2 19:59:27.551557 kubelet[1441]: E1002 19:59:27.551484 1441 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:27.661509 kubelet[1441]: E1002 19:59:27.661386 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:27.699113 kubelet[1441]: E1002 19:59:27.699052 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:27.722028 env[1142]: time="2023-10-02T19:59:27.721989152Z" level=info msg="StopPodSandbox for 
\"271835d25ec304052cf865c46df77f3c5951e0257652c5de5ff08da89962074d\"" Oct 2 19:59:27.722512 env[1142]: time="2023-10-02T19:59:27.722132718Z" level=info msg="TearDown network for sandbox \"271835d25ec304052cf865c46df77f3c5951e0257652c5de5ff08da89962074d\" successfully" Oct 2 19:59:27.722595 env[1142]: time="2023-10-02T19:59:27.722517653Z" level=info msg="StopPodSandbox for \"271835d25ec304052cf865c46df77f3c5951e0257652c5de5ff08da89962074d\" returns successfully" Oct 2 19:59:27.723074 kubelet[1441]: I1002 19:59:27.723056 1441 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=0a5951a4-b50b-4eb1-a072-46c96f4c3f9f path="/var/lib/kubelet/pods/0a5951a4-b50b-4eb1-a072-46c96f4c3f9f/volumes" Oct 2 19:59:28.026891 kubelet[1441]: E1002 19:59:28.026863 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:28.029469 env[1142]: time="2023-10-02T19:59:28.029416998Z" level=info msg="CreateContainer within sandbox \"99adca3d6ad95114130f2906fbf0bad7dccc883d28324e8fa81b830f07adbc5d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:59:28.038506 env[1142]: time="2023-10-02T19:59:28.038460750Z" level=info msg="CreateContainer within sandbox \"99adca3d6ad95114130f2906fbf0bad7dccc883d28324e8fa81b830f07adbc5d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"e85097a6157dfb78d411cb84b6a3850486826bdd3e64008f65d12dfb063ebfc5\"" Oct 2 19:59:28.039102 env[1142]: time="2023-10-02T19:59:28.039039413Z" level=info msg="StartContainer for \"e85097a6157dfb78d411cb84b6a3850486826bdd3e64008f65d12dfb063ebfc5\"" Oct 2 19:59:28.056902 systemd[1]: Started cri-containerd-e85097a6157dfb78d411cb84b6a3850486826bdd3e64008f65d12dfb063ebfc5.scope. Oct 2 19:59:28.078414 systemd[1]: cri-containerd-e85097a6157dfb78d411cb84b6a3850486826bdd3e64008f65d12dfb063ebfc5.scope: Deactivated successfully. 
Oct 2 19:59:28.081130 kubelet[1441]: E1002 19:59:28.081019 1441 configmap.go:197] Couldn't get configMap kube-system/cilium-config: configmap "cilium-config" not found
Oct 2 19:59:28.081130 kubelet[1441]: E1002 19:59:28.081096 1441 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/375f6e1d-fc17-4512-94bc-79a304931c7b-cilium-config-path podName:375f6e1d-fc17-4512-94bc-79a304931c7b nodeName:}" failed. No retries permitted until 2023-10-02 19:59:28.581074613 +0000 UTC m=+201.759614147 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/375f6e1d-fc17-4512-94bc-79a304931c7b-cilium-config-path") pod "cilium-rsvlq" (UID: "375f6e1d-fc17-4512-94bc-79a304931c7b") : configmap "cilium-config" not found
Oct 2 19:59:28.086897 env[1142]: time="2023-10-02T19:59:28.086844878Z" level=info msg="shim disconnected" id=e85097a6157dfb78d411cb84b6a3850486826bdd3e64008f65d12dfb063ebfc5
Oct 2 19:59:28.087162 env[1142]: time="2023-10-02T19:59:28.087124929Z" level=warning msg="cleaning up after shim disconnected" id=e85097a6157dfb78d411cb84b6a3850486826bdd3e64008f65d12dfb063ebfc5 namespace=k8s.io
Oct 2 19:59:28.087241 env[1142]: time="2023-10-02T19:59:28.087226493Z" level=info msg="cleaning up dead shim"
Oct 2 19:59:28.096866 env[1142]: time="2023-10-02T19:59:28.096739304Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:59:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2129 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:59:28Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e85097a6157dfb78d411cb84b6a3850486826bdd3e64008f65d12dfb063ebfc5/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Oct 2 19:59:28.097352 env[1142]: time="2023-10-02T19:59:28.097291365Z" level=error msg="copy shim log" error="read /proc/self/fd/30: file already closed"
Oct 2 19:59:28.097554 env[1142]: time="2023-10-02T19:59:28.097509374Z" level=error msg="Failed to pipe stdout of container \"e85097a6157dfb78d411cb84b6a3850486826bdd3e64008f65d12dfb063ebfc5\"" error="reading from a closed fifo"
Oct 2 19:59:28.097620 env[1142]: time="2023-10-02T19:59:28.097593937Z" level=error msg="Failed to pipe stderr of container \"e85097a6157dfb78d411cb84b6a3850486826bdd3e64008f65d12dfb063ebfc5\"" error="reading from a closed fifo"
Oct 2 19:59:28.100454 env[1142]: time="2023-10-02T19:59:28.100408087Z" level=error msg="StartContainer for \"e85097a6157dfb78d411cb84b6a3850486826bdd3e64008f65d12dfb063ebfc5\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Oct 2 19:59:28.101003 kubelet[1441]: E1002 19:59:28.100828 1441 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e85097a6157dfb78d411cb84b6a3850486826bdd3e64008f65d12dfb063ebfc5"
Oct 2 19:59:28.101003 kubelet[1441]: E1002 19:59:28.100932 1441 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Oct 2 19:59:28.101003 kubelet[1441]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Oct 2 19:59:28.101003 kubelet[1441]: rm /hostbin/cilium-mount
Oct 2 19:59:28.101193 kubelet[1441]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-qkjn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-rsvlq_kube-system(375f6e1d-fc17-4512-94bc-79a304931c7b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Oct 2 19:59:28.101244 kubelet[1441]: E1002 19:59:28.100964 1441 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-rsvlq" podUID=375f6e1d-fc17-4512-94bc-79a304931c7b
Oct 2 19:59:28.584395 kubelet[1441]: E1002 19:59:28.584332 1441 configmap.go:197] Couldn't get configMap kube-system/cilium-config: configmap "cilium-config" not found
Oct 2 19:59:28.584395 kubelet[1441]: E1002 19:59:28.584407 1441 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/375f6e1d-fc17-4512-94bc-79a304931c7b-cilium-config-path podName:375f6e1d-fc17-4512-94bc-79a304931c7b nodeName:}" failed. No retries permitted until 2023-10-02 19:59:29.584392247 +0000 UTC m=+202.762931781 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/375f6e1d-fc17-4512-94bc-79a304931c7b-cilium-config-path") pod "cilium-rsvlq" (UID: "375f6e1d-fc17-4512-94bc-79a304931c7b") : configmap "cilium-config" not found
Oct 2 19:59:28.699237 kubelet[1441]: E1002 19:59:28.699172 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:59:28.728636 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e85097a6157dfb78d411cb84b6a3850486826bdd3e64008f65d12dfb063ebfc5-rootfs.mount: Deactivated successfully.
Oct 2 19:59:29.029820 kubelet[1441]: I1002 19:59:29.029790 1441 scope.go:115] "RemoveContainer" containerID="8bd22014e570839f71c6e69773daf9e0cbfc7be2f4e6694d0370bf822bcc9852"
Oct 2 19:59:29.030404 env[1142]: time="2023-10-02T19:59:29.030369248Z" level=info msg="StopPodSandbox for \"99adca3d6ad95114130f2906fbf0bad7dccc883d28324e8fa81b830f07adbc5d\""
Oct 2 19:59:29.030756 env[1142]: time="2023-10-02T19:59:29.030727462Z" level=info msg="Container to stop \"8bd22014e570839f71c6e69773daf9e0cbfc7be2f4e6694d0370bf822bcc9852\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 2 19:59:29.030868 env[1142]: time="2023-10-02T19:59:29.030815705Z" level=info msg="Container to stop \"e85097a6157dfb78d411cb84b6a3850486826bdd3e64008f65d12dfb063ebfc5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 2 19:59:29.033008 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-99adca3d6ad95114130f2906fbf0bad7dccc883d28324e8fa81b830f07adbc5d-shm.mount: Deactivated successfully.
Oct 2 19:59:29.033877 env[1142]: time="2023-10-02T19:59:29.033840983Z" level=info msg="RemoveContainer for \"8bd22014e570839f71c6e69773daf9e0cbfc7be2f4e6694d0370bf822bcc9852\""
Oct 2 19:59:29.037660 env[1142]: time="2023-10-02T19:59:29.037482446Z" level=info msg="RemoveContainer for \"8bd22014e570839f71c6e69773daf9e0cbfc7be2f4e6694d0370bf822bcc9852\" returns successfully"
Oct 2 19:59:29.041607 systemd[1]: cri-containerd-99adca3d6ad95114130f2906fbf0bad7dccc883d28324e8fa81b830f07adbc5d.scope: Deactivated successfully.
Oct 2 19:59:29.041000 audit: BPF prog-id=78 op=UNLOAD
Oct 2 19:59:29.048000 audit: BPF prog-id=81 op=UNLOAD
Oct 2 19:59:29.061884 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99adca3d6ad95114130f2906fbf0bad7dccc883d28324e8fa81b830f07adbc5d-rootfs.mount: Deactivated successfully.
Oct 2 19:59:29.067670 env[1142]: time="2023-10-02T19:59:29.067610184Z" level=info msg="shim disconnected" id=99adca3d6ad95114130f2906fbf0bad7dccc883d28324e8fa81b830f07adbc5d
Oct 2 19:59:29.067670 env[1142]: time="2023-10-02T19:59:29.067661146Z" level=warning msg="cleaning up after shim disconnected" id=99adca3d6ad95114130f2906fbf0bad7dccc883d28324e8fa81b830f07adbc5d namespace=k8s.io
Oct 2 19:59:29.067670 env[1142]: time="2023-10-02T19:59:29.067671226Z" level=info msg="cleaning up dead shim"
Oct 2 19:59:29.076119 env[1142]: time="2023-10-02T19:59:29.076069155Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:59:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2160 runtime=io.containerd.runc.v2\n"
Oct 2 19:59:29.076401 env[1142]: time="2023-10-02T19:59:29.076375567Z" level=info msg="TearDown network for sandbox \"99adca3d6ad95114130f2906fbf0bad7dccc883d28324e8fa81b830f07adbc5d\" successfully"
Oct 2 19:59:29.076438 env[1142]: time="2023-10-02T19:59:29.076400848Z" level=info msg="StopPodSandbox for \"99adca3d6ad95114130f2906fbf0bad7dccc883d28324e8fa81b830f07adbc5d\" returns successfully"
Oct 2 19:59:29.188277 kubelet[1441]: I1002 19:59:29.188220 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/375f6e1d-fc17-4512-94bc-79a304931c7b-clustermesh-secrets\") pod \"375f6e1d-fc17-4512-94bc-79a304931c7b\" (UID: \"375f6e1d-fc17-4512-94bc-79a304931c7b\") "
Oct 2 19:59:29.188277 kubelet[1441]: I1002 19:59:29.188274 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-xtables-lock\") pod \"375f6e1d-fc17-4512-94bc-79a304931c7b\" (UID: \"375f6e1d-fc17-4512-94bc-79a304931c7b\") "
Oct 2 19:59:29.188498 kubelet[1441]: I1002 19:59:29.188306 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/375f6e1d-fc17-4512-94bc-79a304931c7b-cilium-config-path\") pod \"375f6e1d-fc17-4512-94bc-79a304931c7b\" (UID: \"375f6e1d-fc17-4512-94bc-79a304931c7b\") "
Oct 2 19:59:29.188498 kubelet[1441]: I1002 19:59:29.188324 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-hostproc\") pod \"375f6e1d-fc17-4512-94bc-79a304931c7b\" (UID: \"375f6e1d-fc17-4512-94bc-79a304931c7b\") "
Oct 2 19:59:29.188498 kubelet[1441]: I1002 19:59:29.188350 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-cilium-cgroup\") pod \"375f6e1d-fc17-4512-94bc-79a304931c7b\" (UID: \"375f6e1d-fc17-4512-94bc-79a304931c7b\") "
Oct 2 19:59:29.188498 kubelet[1441]: I1002 19:59:29.188368 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-host-proc-sys-net\") pod \"375f6e1d-fc17-4512-94bc-79a304931c7b\" (UID: \"375f6e1d-fc17-4512-94bc-79a304931c7b\") "
Oct 2 19:59:29.188498 kubelet[1441]: I1002 19:59:29.188389 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkjn8\" (UniqueName: \"kubernetes.io/projected/375f6e1d-fc17-4512-94bc-79a304931c7b-kube-api-access-qkjn8\") pod \"375f6e1d-fc17-4512-94bc-79a304931c7b\" (UID: \"375f6e1d-fc17-4512-94bc-79a304931c7b\") "
Oct 2 19:59:29.188498 kubelet[1441]: I1002 19:59:29.188407 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-lib-modules\") pod \"375f6e1d-fc17-4512-94bc-79a304931c7b\" (UID: \"375f6e1d-fc17-4512-94bc-79a304931c7b\") "
Oct 2 19:59:29.188643 kubelet[1441]: I1002 19:59:29.188392 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "375f6e1d-fc17-4512-94bc-79a304931c7b" (UID: "375f6e1d-fc17-4512-94bc-79a304931c7b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:59:29.188643 kubelet[1441]: I1002 19:59:29.188430 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-host-proc-sys-kernel\") pod \"375f6e1d-fc17-4512-94bc-79a304931c7b\" (UID: \"375f6e1d-fc17-4512-94bc-79a304931c7b\") "
Oct 2 19:59:29.188643 kubelet[1441]: I1002 19:59:29.188452 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "375f6e1d-fc17-4512-94bc-79a304931c7b" (UID: "375f6e1d-fc17-4512-94bc-79a304931c7b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:59:29.188643 kubelet[1441]: I1002 19:59:29.188484 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/375f6e1d-fc17-4512-94bc-79a304931c7b-hubble-tls\") pod \"375f6e1d-fc17-4512-94bc-79a304931c7b\" (UID: \"375f6e1d-fc17-4512-94bc-79a304931c7b\") "
Oct 2 19:59:29.188643 kubelet[1441]: I1002 19:59:29.188508 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-etc-cni-netd\") pod \"375f6e1d-fc17-4512-94bc-79a304931c7b\" (UID: \"375f6e1d-fc17-4512-94bc-79a304931c7b\") "
Oct 2 19:59:29.188643 kubelet[1441]: I1002 19:59:29.188527 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-bpf-maps\") pod \"375f6e1d-fc17-4512-94bc-79a304931c7b\" (UID: \"375f6e1d-fc17-4512-94bc-79a304931c7b\") "
Oct 2 19:59:29.188781 kubelet[1441]: I1002 19:59:29.188544 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-cni-path\") pod \"375f6e1d-fc17-4512-94bc-79a304931c7b\" (UID: \"375f6e1d-fc17-4512-94bc-79a304931c7b\") "
Oct 2 19:59:29.188781 kubelet[1441]: I1002 19:59:29.188559 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-cilium-run\") pod \"375f6e1d-fc17-4512-94bc-79a304931c7b\" (UID: \"375f6e1d-fc17-4512-94bc-79a304931c7b\") "
Oct 2 19:59:29.188781 kubelet[1441]: I1002 19:59:29.188583 1441 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-host-proc-sys-kernel\") on node \"10.0.0.10\" DevicePath \"\""
Oct 2 19:59:29.188781 kubelet[1441]: I1002 19:59:29.188595 1441 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-xtables-lock\") on node \"10.0.0.10\" DevicePath \"\""
Oct 2 19:59:29.188781 kubelet[1441]: I1002 19:59:29.188612 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "375f6e1d-fc17-4512-94bc-79a304931c7b" (UID: "375f6e1d-fc17-4512-94bc-79a304931c7b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:59:29.188781 kubelet[1441]: W1002 19:59:29.188622 1441 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/375f6e1d-fc17-4512-94bc-79a304931c7b/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Oct 2 19:59:29.188934 kubelet[1441]: I1002 19:59:29.188754 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "375f6e1d-fc17-4512-94bc-79a304931c7b" (UID: "375f6e1d-fc17-4512-94bc-79a304931c7b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:59:29.188934 kubelet[1441]: I1002 19:59:29.188844 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "375f6e1d-fc17-4512-94bc-79a304931c7b" (UID: "375f6e1d-fc17-4512-94bc-79a304931c7b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:59:29.188934 kubelet[1441]: I1002 19:59:29.188857 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "375f6e1d-fc17-4512-94bc-79a304931c7b" (UID: "375f6e1d-fc17-4512-94bc-79a304931c7b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:59:29.188934 kubelet[1441]: I1002 19:59:29.188868 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-cni-path" (OuterVolumeSpecName: "cni-path") pod "375f6e1d-fc17-4512-94bc-79a304931c7b" (UID: "375f6e1d-fc17-4512-94bc-79a304931c7b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:59:29.189100 kubelet[1441]: I1002 19:59:29.188881 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-hostproc" (OuterVolumeSpecName: "hostproc") pod "375f6e1d-fc17-4512-94bc-79a304931c7b" (UID: "375f6e1d-fc17-4512-94bc-79a304931c7b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:59:29.189100 kubelet[1441]: I1002 19:59:29.188952 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "375f6e1d-fc17-4512-94bc-79a304931c7b" (UID: "375f6e1d-fc17-4512-94bc-79a304931c7b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:59:29.189100 kubelet[1441]: I1002 19:59:29.188971 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "375f6e1d-fc17-4512-94bc-79a304931c7b" (UID: "375f6e1d-fc17-4512-94bc-79a304931c7b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:59:29.190602 kubelet[1441]: I1002 19:59:29.190560 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/375f6e1d-fc17-4512-94bc-79a304931c7b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "375f6e1d-fc17-4512-94bc-79a304931c7b" (UID: "375f6e1d-fc17-4512-94bc-79a304931c7b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 2 19:59:29.191983 systemd[1]: var-lib-kubelet-pods-375f6e1d\x2dfc17\x2d4512\x2d94bc\x2d79a304931c7b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Oct 2 19:59:29.193151 kubelet[1441]: I1002 19:59:29.193108 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/375f6e1d-fc17-4512-94bc-79a304931c7b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "375f6e1d-fc17-4512-94bc-79a304931c7b" (UID: "375f6e1d-fc17-4512-94bc-79a304931c7b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 2 19:59:29.193214 systemd[1]: var-lib-kubelet-pods-375f6e1d\x2dfc17\x2d4512\x2d94bc\x2d79a304931c7b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqkjn8.mount: Deactivated successfully.
Oct 2 19:59:29.194005 kubelet[1441]: I1002 19:59:29.193971 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/375f6e1d-fc17-4512-94bc-79a304931c7b-kube-api-access-qkjn8" (OuterVolumeSpecName: "kube-api-access-qkjn8") pod "375f6e1d-fc17-4512-94bc-79a304931c7b" (UID: "375f6e1d-fc17-4512-94bc-79a304931c7b"). InnerVolumeSpecName "kube-api-access-qkjn8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 19:59:29.194377 kubelet[1441]: I1002 19:59:29.194351 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/375f6e1d-fc17-4512-94bc-79a304931c7b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "375f6e1d-fc17-4512-94bc-79a304931c7b" (UID: "375f6e1d-fc17-4512-94bc-79a304931c7b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 19:59:29.288880 kubelet[1441]: I1002 19:59:29.288751 1441 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/375f6e1d-fc17-4512-94bc-79a304931c7b-hubble-tls\") on node \"10.0.0.10\" DevicePath \"\""
Oct 2 19:59:29.288880 kubelet[1441]: I1002 19:59:29.288789 1441 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-etc-cni-netd\") on node \"10.0.0.10\" DevicePath \"\""
Oct 2 19:59:29.288880 kubelet[1441]: I1002 19:59:29.288800 1441 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-bpf-maps\") on node \"10.0.0.10\" DevicePath \"\""
Oct 2 19:59:29.288880 kubelet[1441]: I1002 19:59:29.288810 1441 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-cni-path\") on node \"10.0.0.10\" DevicePath \"\""
Oct 2 19:59:29.288880 kubelet[1441]: I1002 19:59:29.288819 1441 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-cilium-run\") on node \"10.0.0.10\" DevicePath \"\""
Oct 2 19:59:29.288880 kubelet[1441]: I1002 19:59:29.288830 1441 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/375f6e1d-fc17-4512-94bc-79a304931c7b-clustermesh-secrets\") on node \"10.0.0.10\" DevicePath \"\""
Oct 2 19:59:29.288880 kubelet[1441]: I1002 19:59:29.288840 1441 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/375f6e1d-fc17-4512-94bc-79a304931c7b-cilium-config-path\") on node \"10.0.0.10\" DevicePath \"\""
Oct 2 19:59:29.288880 kubelet[1441]: I1002 19:59:29.288849 1441 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-hostproc\") on node \"10.0.0.10\" DevicePath \"\""
Oct 2 19:59:29.288880 kubelet[1441]: I1002 19:59:29.288857 1441 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-cilium-cgroup\") on node \"10.0.0.10\" DevicePath \"\""
Oct 2 19:59:29.290059 kubelet[1441]: I1002 19:59:29.290036 1441 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-host-proc-sys-net\") on node \"10.0.0.10\" DevicePath \"\""
Oct 2 19:59:29.290059 kubelet[1441]: I1002 19:59:29.290057 1441 reconciler.go:399] "Volume detached for volume \"kube-api-access-qkjn8\" (UniqueName: \"kubernetes.io/projected/375f6e1d-fc17-4512-94bc-79a304931c7b-kube-api-access-qkjn8\") on node \"10.0.0.10\" DevicePath \"\""
Oct 2 19:59:29.290131 kubelet[1441]: I1002 19:59:29.290066 1441 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/375f6e1d-fc17-4512-94bc-79a304931c7b-lib-modules\") on node \"10.0.0.10\" DevicePath \"\""
Oct 2 19:59:29.699618 kubelet[1441]: E1002 19:59:29.699472 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:59:29.727402 systemd[1]: Removed slice kubepods-burstable-pod375f6e1d_fc17_4512_94bc_79a304931c7b.slice.
Oct 2 19:59:29.728655 systemd[1]: var-lib-kubelet-pods-375f6e1d\x2dfc17\x2d4512\x2d94bc\x2d79a304931c7b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Oct 2 19:59:30.032571 kubelet[1441]: I1002 19:59:30.032541 1441 scope.go:115] "RemoveContainer" containerID="e85097a6157dfb78d411cb84b6a3850486826bdd3e64008f65d12dfb063ebfc5"
Oct 2 19:59:30.034223 env[1142]: time="2023-10-02T19:59:30.034163705Z" level=info msg="RemoveContainer for \"e85097a6157dfb78d411cb84b6a3850486826bdd3e64008f65d12dfb063ebfc5\""
Oct 2 19:59:30.036412 env[1142]: time="2023-10-02T19:59:30.036372391Z" level=info msg="RemoveContainer for \"e85097a6157dfb78d411cb84b6a3850486826bdd3e64008f65d12dfb063ebfc5\" returns successfully"
Oct 2 19:59:30.625348 kubelet[1441]: W1002 19:59:30.625277 1441 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod375f6e1d_fc17_4512_94bc_79a304931c7b.slice/cri-containerd-8bd22014e570839f71c6e69773daf9e0cbfc7be2f4e6694d0370bf822bcc9852.scope WatchSource:0}: container "8bd22014e570839f71c6e69773daf9e0cbfc7be2f4e6694d0370bf822bcc9852" in namespace "k8s.io": not found
Oct 2 19:59:30.700418 kubelet[1441]: E1002 19:59:30.700286 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:59:31.195439 kubelet[1441]: I1002 19:59:31.195403 1441 topology_manager.go:205] "Topology Admit Handler"
Oct 2 19:59:31.195439 kubelet[1441]: E1002 19:59:31.195448 1441 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="375f6e1d-fc17-4512-94bc-79a304931c7b" containerName="mount-cgroup"
Oct 2 19:59:31.195599 kubelet[1441]: E1002 19:59:31.195459 1441 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="375f6e1d-fc17-4512-94bc-79a304931c7b" containerName="mount-cgroup"
Oct 2 19:59:31.195599 kubelet[1441]: I1002 19:59:31.195475 1441 memory_manager.go:345] "RemoveStaleState removing state" podUID="375f6e1d-fc17-4512-94bc-79a304931c7b" containerName="mount-cgroup"
Oct 2 19:59:31.195599 kubelet[1441]: I1002 19:59:31.195480 1441 memory_manager.go:345] "RemoveStaleState removing state" podUID="375f6e1d-fc17-4512-94bc-79a304931c7b" containerName="mount-cgroup"
Oct 2 19:59:31.199454 kubelet[1441]: I1002 19:59:31.199422 1441 topology_manager.go:205] "Topology Admit Handler"
Oct 2 19:59:31.200736 systemd[1]: Created slice kubepods-besteffort-podbeda364c_1a53_4dd5_ac68_372ebf92e452.slice.
Oct 2 19:59:31.204358 kubelet[1441]: W1002 19:59:31.204332 1441 reflector.go:424] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:10.0.0.10" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.10' and this object
Oct 2 19:59:31.204422 kubelet[1441]: E1002 19:59:31.204363 1441 reflector.go:140] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:10.0.0.10" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.10' and this object
Oct 2 19:59:31.205893 systemd[1]: Created slice kubepods-burstable-podaa431012_a88a_4263_b54d_326eac1071a3.slice.
Oct 2 19:59:31.302262 kubelet[1441]: I1002 19:59:31.302204 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa431012-a88a-4263-b54d-326eac1071a3-cilium-config-path\") pod \"cilium-99b4d\" (UID: \"aa431012-a88a-4263-b54d-326eac1071a3\") " pod="kube-system/cilium-99b4d"
Oct 2 19:59:31.302262 kubelet[1441]: I1002 19:59:31.302268 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-host-proc-sys-net\") pod \"cilium-99b4d\" (UID: \"aa431012-a88a-4263-b54d-326eac1071a3\") " pod="kube-system/cilium-99b4d"
Oct 2 19:59:31.302435 kubelet[1441]: I1002 19:59:31.302291 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-host-proc-sys-kernel\") pod \"cilium-99b4d\" (UID: \"aa431012-a88a-4263-b54d-326eac1071a3\") " pod="kube-system/cilium-99b4d"
Oct 2 19:59:31.302435 kubelet[1441]: I1002 19:59:31.302325 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-cilium-run\") pod \"cilium-99b4d\" (UID: \"aa431012-a88a-4263-b54d-326eac1071a3\") " pod="kube-system/cilium-99b4d"
Oct 2 19:59:31.302435 kubelet[1441]: I1002 19:59:31.302347 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-lib-modules\") pod \"cilium-99b4d\" (UID: \"aa431012-a88a-4263-b54d-326eac1071a3\") " pod="kube-system/cilium-99b4d"
Oct 2 19:59:31.302435 kubelet[1441]: I1002 19:59:31.302365 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aa431012-a88a-4263-b54d-326eac1071a3-clustermesh-secrets\") pod \"cilium-99b4d\" (UID: \"aa431012-a88a-4263-b54d-326eac1071a3\") " pod="kube-system/cilium-99b4d"
Oct 2 19:59:31.302435 kubelet[1441]: I1002 19:59:31.302384 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-cni-path\") pod \"cilium-99b4d\" (UID: \"aa431012-a88a-4263-b54d-326eac1071a3\") " pod="kube-system/cilium-99b4d"
Oct 2 19:59:31.302435 kubelet[1441]: I1002 19:59:31.302412 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/aa431012-a88a-4263-b54d-326eac1071a3-cilium-ipsec-secrets\") pod \"cilium-99b4d\" (UID: \"aa431012-a88a-4263-b54d-326eac1071a3\") " pod="kube-system/cilium-99b4d"
Oct 2 19:59:31.302581 kubelet[1441]: I1002 19:59:31.302432 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aa431012-a88a-4263-b54d-326eac1071a3-hubble-tls\") pod \"cilium-99b4d\" (UID: \"aa431012-a88a-4263-b54d-326eac1071a3\") " pod="kube-system/cilium-99b4d"
Oct 2 19:59:31.302581 kubelet[1441]: I1002 19:59:31.302455 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/beda364c-1a53-4dd5-ac68-372ebf92e452-cilium-config-path\") pod \"cilium-operator-69b677f97c-5bv7c\" (UID: \"beda364c-1a53-4dd5-ac68-372ebf92e452\") " pod="kube-system/cilium-operator-69b677f97c-5bv7c"
Oct 2 19:59:31.302581 kubelet[1441]: I1002 19:59:31.302483 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-bpf-maps\") pod \"cilium-99b4d\" (UID: \"aa431012-a88a-4263-b54d-326eac1071a3\") " pod="kube-system/cilium-99b4d"
Oct 2 19:59:31.302581 kubelet[1441]: I1002 19:59:31.302502 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-cilium-cgroup\") pod \"cilium-99b4d\" (UID: \"aa431012-a88a-4263-b54d-326eac1071a3\") " pod="kube-system/cilium-99b4d"
Oct 2 19:59:31.302581 kubelet[1441]: I1002 19:59:31.302519 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-etc-cni-netd\") pod \"cilium-99b4d\" (UID: \"aa431012-a88a-4263-b54d-326eac1071a3\") " pod="kube-system/cilium-99b4d"
Oct 2 19:59:31.302581 kubelet[1441]: I1002 19:59:31.302537 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-xtables-lock\") pod \"cilium-99b4d\" (UID: \"aa431012-a88a-4263-b54d-326eac1071a3\") " pod="kube-system/cilium-99b4d"
Oct 2 19:59:31.302714 kubelet[1441]: I1002 19:59:31.302568 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb757\" (UniqueName: \"kubernetes.io/projected/beda364c-1a53-4dd5-ac68-372ebf92e452-kube-api-access-zb757\") pod \"cilium-operator-69b677f97c-5bv7c\" (UID: \"beda364c-1a53-4dd5-ac68-372ebf92e452\") " pod="kube-system/cilium-operator-69b677f97c-5bv7c"
Oct 2 19:59:31.302714 kubelet[1441]: I1002 19:59:31.302587 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-hostproc\") pod \"cilium-99b4d\" (UID: \"aa431012-a88a-4263-b54d-326eac1071a3\") " pod="kube-system/cilium-99b4d"
Oct 2 19:59:31.302714 kubelet[1441]: I1002 19:59:31.302607 1441 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmftz\" (UniqueName: \"kubernetes.io/projected/aa431012-a88a-4263-b54d-326eac1071a3-kube-api-access-xmftz\") pod \"cilium-99b4d\" (UID: \"aa431012-a88a-4263-b54d-326eac1071a3\") " pod="kube-system/cilium-99b4d"
Oct 2 19:59:31.503497 kubelet[1441]: E1002 19:59:31.503449 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:59:31.504030 env[1142]: time="2023-10-02T19:59:31.503967563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-5bv7c,Uid:beda364c-1a53-4dd5-ac68-372ebf92e452,Namespace:kube-system,Attempt:0,}"
Oct 2 19:59:31.519403 env[1142]: time="2023-10-02T19:59:31.519334007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 2 19:59:31.519403 env[1142]: time="2023-10-02T19:59:31.519373648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 2 19:59:31.519541 env[1142]: time="2023-10-02T19:59:31.519413530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 2 19:59:31.519667 env[1142]: time="2023-10-02T19:59:31.519634299Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/18ff4ce40afc814b7454b10fd6e64f8c0b89e660efd9ed96254ddbb76a70a913 pid=2187 runtime=io.containerd.runc.v2
Oct 2 19:59:31.531411 systemd[1]: Started cri-containerd-18ff4ce40afc814b7454b10fd6e64f8c0b89e660efd9ed96254ddbb76a70a913.scope.
Oct 2 19:59:31.566000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.566000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.566000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.566000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.566000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.566000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.566000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.566000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.566000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.567000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.567000 audit: BPF prog-id=82 op=LOAD Oct 2 19:59:31.571000 audit[2197]: AVC avc: denied { bpf } for pid=2197 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.571000 audit[2197]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000195b38 a2=10 a3=0 items=0 ppid=2187 pid=2197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:31.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138666634636534306166633831346237343534623130666436653634 Oct 2 19:59:31.571000 audit[2197]: AVC avc: denied { perfmon } for pid=2197 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.571000 audit[2197]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=2187 pid=2197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:31.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138666634636534306166633831346237343534623130666436653634 Oct 2 19:59:31.571000 audit[2197]: AVC avc: denied { bpf } for pid=2197 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.571000 audit[2197]: AVC avc: denied { bpf } for pid=2197 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.571000 audit[2197]: AVC avc: denied { bpf } for pid=2197 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.571000 audit[2197]: AVC avc: denied { perfmon } for pid=2197 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.571000 audit[2197]: AVC avc: denied { perfmon } for pid=2197 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.571000 audit[2197]: AVC avc: denied { perfmon } for pid=2197 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.571000 audit[2197]: AVC avc: denied { perfmon } for pid=2197 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.571000 audit[2197]: AVC avc: denied { perfmon } for pid=2197 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.571000 audit[2197]: AVC avc: denied { bpf } for pid=2197 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.571000 audit[2197]: AVC avc: denied { bpf } for pid=2197 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.571000 audit: BPF 
prog-id=83 op=LOAD Oct 2 19:59:31.571000 audit[2197]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=2187 pid=2197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:31.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138666634636534306166633831346237343534623130666436653634 Oct 2 19:59:31.572000 audit[2197]: AVC avc: denied { bpf } for pid=2197 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.572000 audit[2197]: AVC avc: denied { bpf } for pid=2197 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.572000 audit[2197]: AVC avc: denied { perfmon } for pid=2197 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.572000 audit[2197]: AVC avc: denied { perfmon } for pid=2197 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.572000 audit[2197]: AVC avc: denied { perfmon } for pid=2197 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.572000 audit[2197]: AVC avc: denied { perfmon } for pid=2197 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.572000 audit[2197]: AVC avc: denied { perfmon } for 
pid=2197 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.572000 audit[2197]: AVC avc: denied { bpf } for pid=2197 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.572000 audit[2197]: AVC avc: denied { bpf } for pid=2197 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.572000 audit: BPF prog-id=84 op=LOAD Oct 2 19:59:31.572000 audit[2197]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=2187 pid=2197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:31.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138666634636534306166633831346237343534623130666436653634 Oct 2 19:59:31.572000 audit: BPF prog-id=84 op=UNLOAD Oct 2 19:59:31.572000 audit: BPF prog-id=83 op=UNLOAD Oct 2 19:59:31.572000 audit[2197]: AVC avc: denied { bpf } for pid=2197 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.572000 audit[2197]: AVC avc: denied { bpf } for pid=2197 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.572000 audit[2197]: AVC avc: denied { bpf } for pid=2197 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:59:31.572000 audit[2197]: AVC avc: denied { perfmon } for pid=2197 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.572000 audit[2197]: AVC avc: denied { perfmon } for pid=2197 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.572000 audit[2197]: AVC avc: denied { perfmon } for pid=2197 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.572000 audit[2197]: AVC avc: denied { perfmon } for pid=2197 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.572000 audit[2197]: AVC avc: denied { perfmon } for pid=2197 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.572000 audit[2197]: AVC avc: denied { bpf } for pid=2197 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.572000 audit[2197]: AVC avc: denied { bpf } for pid=2197 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:31.572000 audit: BPF prog-id=85 op=LOAD Oct 2 19:59:31.572000 audit[2197]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=2187 pid=2197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:31.572000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138666634636534306166633831346237343534623130666436653634 Oct 2 19:59:31.598003 env[1142]: time="2023-10-02T19:59:31.597949455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-5bv7c,Uid:beda364c-1a53-4dd5-ac68-372ebf92e452,Namespace:kube-system,Attempt:0,} returns sandbox id \"18ff4ce40afc814b7454b10fd6e64f8c0b89e660efd9ed96254ddbb76a70a913\"" Oct 2 19:59:31.598540 kubelet[1441]: E1002 19:59:31.598507 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:31.599463 env[1142]: time="2023-10-02T19:59:31.599421713Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\"" Oct 2 19:59:31.700453 kubelet[1441]: E1002 19:59:31.700385 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:31.723338 kubelet[1441]: I1002 19:59:31.723299 1441 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=375f6e1d-fc17-4512-94bc-79a304931c7b path="/var/lib/kubelet/pods/375f6e1d-fc17-4512-94bc-79a304931c7b/volumes" Oct 2 19:59:32.120583 kubelet[1441]: E1002 19:59:32.120545 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:32.121256 env[1142]: time="2023-10-02T19:59:32.121217064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-99b4d,Uid:aa431012-a88a-4263-b54d-326eac1071a3,Namespace:kube-system,Attempt:0,}" Oct 2 19:59:32.140144 env[1142]: time="2023-10-02T19:59:32.140052605Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:59:32.140144 env[1142]: time="2023-10-02T19:59:32.140092087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:59:32.140144 env[1142]: time="2023-10-02T19:59:32.140103687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:59:32.140373 env[1142]: time="2023-10-02T19:59:32.140340817Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/83c6dcd7b4371a15793ffc1d3bca6414145f694b20cb9a508892b43078be3888 pid=2229 runtime=io.containerd.runc.v2 Oct 2 19:59:32.153927 systemd[1]: Started cri-containerd-83c6dcd7b4371a15793ffc1d3bca6414145f694b20cb9a508892b43078be3888.scope. Oct 2 19:59:32.175816 kernel: kauditd_printk_skb: 108 callbacks suppressed Oct 2 19:59:32.175937 kernel: audit: type=1400 audit(1696276772.173:704): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.173000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.173000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.178924 kernel: audit: type=1400 audit(1696276772.173:705): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.173000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.180686 kernel: audit: type=1400 audit(1696276772.173:706): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.180748 kernel: audit: type=1400 audit(1696276772.173:707): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.173000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.182522 kernel: audit: type=1400 audit(1696276772.173:708): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.173000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.173000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.186544 kernel: audit: type=1400 audit(1696276772.173:709): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.186599 kernel: audit: type=1400 audit(1696276772.173:710): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Oct 2 19:59:32.173000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.173000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.190435 kernel: audit: type=1400 audit(1696276772.173:711): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.173000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.192715 kernel: audit: type=1400 audit(1696276772.173:712): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.192756 kernel: audit: type=1400 audit(1696276772.174:713): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.174000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.174000 audit: BPF prog-id=86 op=LOAD Oct 2 19:59:32.179000 audit[2239]: AVC avc: denied { bpf } for pid=2239 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.179000 audit[2239]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000145b38 a2=10 a3=0 items=0 ppid=2229 pid=2239 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:32.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833633664636437623433373161313537393366666331643362636136 Oct 2 19:59:32.179000 audit[2239]: AVC avc: denied { perfmon } for pid=2239 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.179000 audit[2239]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001455a0 a2=3c a3=0 items=0 ppid=2229 pid=2239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:32.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833633664636437623433373161313537393366666331643362636136 Oct 2 19:59:32.179000 audit[2239]: AVC avc: denied { bpf } for pid=2239 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.179000 audit[2239]: AVC avc: denied { bpf } for pid=2239 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.179000 audit[2239]: AVC avc: denied { bpf } for pid=2239 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.179000 audit[2239]: AVC 
avc: denied { perfmon } for pid=2239 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.179000 audit[2239]: AVC avc: denied { perfmon } for pid=2239 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.179000 audit[2239]: AVC avc: denied { perfmon } for pid=2239 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.179000 audit[2239]: AVC avc: denied { perfmon } for pid=2239 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.179000 audit[2239]: AVC avc: denied { perfmon } for pid=2239 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.179000 audit[2239]: AVC avc: denied { bpf } for pid=2239 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.179000 audit[2239]: AVC avc: denied { bpf } for pid=2239 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.179000 audit: BPF prog-id=87 op=LOAD Oct 2 19:59:32.179000 audit[2239]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001458e0 a2=78 a3=0 items=0 ppid=2229 pid=2239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:32.179000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833633664636437623433373161313537393366666331643362636136 Oct 2 19:59:32.179000 audit[2239]: AVC avc: denied { bpf } for pid=2239 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.179000 audit[2239]: AVC avc: denied { bpf } for pid=2239 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.179000 audit[2239]: AVC avc: denied { perfmon } for pid=2239 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.179000 audit[2239]: AVC avc: denied { perfmon } for pid=2239 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.179000 audit[2239]: AVC avc: denied { perfmon } for pid=2239 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.179000 audit[2239]: AVC avc: denied { perfmon } for pid=2239 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.179000 audit[2239]: AVC avc: denied { perfmon } for pid=2239 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.179000 audit[2239]: AVC avc: denied { bpf } for pid=2239 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.179000 audit[2239]: AVC avc: 
denied { bpf } for pid=2239 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.179000 audit: BPF prog-id=88 op=LOAD Oct 2 19:59:32.179000 audit[2239]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000145670 a2=78 a3=0 items=0 ppid=2229 pid=2239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:32.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833633664636437623433373161313537393366666331643362636136 Oct 2 19:59:32.182000 audit: BPF prog-id=88 op=UNLOAD Oct 2 19:59:32.182000 audit: BPF prog-id=87 op=UNLOAD Oct 2 19:59:32.182000 audit[2239]: AVC avc: denied { bpf } for pid=2239 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.182000 audit[2239]: AVC avc: denied { bpf } for pid=2239 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.182000 audit[2239]: AVC avc: denied { bpf } for pid=2239 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.182000 audit[2239]: AVC avc: denied { perfmon } for pid=2239 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.182000 audit[2239]: AVC avc: denied { perfmon } for pid=2239 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:59:32.182000 audit[2239]: AVC avc: denied { perfmon } for pid=2239 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.182000 audit[2239]: AVC avc: denied { perfmon } for pid=2239 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.182000 audit[2239]: AVC avc: denied { perfmon } for pid=2239 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.182000 audit[2239]: AVC avc: denied { bpf } for pid=2239 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.182000 audit[2239]: AVC avc: denied { bpf } for pid=2239 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.182000 audit: BPF prog-id=89 op=LOAD Oct 2 19:59:32.182000 audit[2239]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000145b40 a2=78 a3=0 items=0 ppid=2229 pid=2239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:32.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833633664636437623433373161313537393366666331643362636136 Oct 2 19:59:32.212322 env[1142]: time="2023-10-02T19:59:32.212280209Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-99b4d,Uid:aa431012-a88a-4263-b54d-326eac1071a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"83c6dcd7b4371a15793ffc1d3bca6414145f694b20cb9a508892b43078be3888\"" Oct 2 19:59:32.212971 kubelet[1441]: E1002 19:59:32.212952 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:32.214593 env[1142]: time="2023-10-02T19:59:32.214559659Z" level=info msg="CreateContainer within sandbox \"83c6dcd7b4371a15793ffc1d3bca6414145f694b20cb9a508892b43078be3888\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:59:32.232442 env[1142]: time="2023-10-02T19:59:32.232386881Z" level=info msg="CreateContainer within sandbox \"83c6dcd7b4371a15793ffc1d3bca6414145f694b20cb9a508892b43078be3888\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"511c0b7f418804badad5f6501e897663fd72837c584c62bfbc9b36be43cda989\"" Oct 2 19:59:32.233234 env[1142]: time="2023-10-02T19:59:32.233196073Z" level=info msg="StartContainer for \"511c0b7f418804badad5f6501e897663fd72837c584c62bfbc9b36be43cda989\"" Oct 2 19:59:32.251598 systemd[1]: Started cri-containerd-511c0b7f418804badad5f6501e897663fd72837c584c62bfbc9b36be43cda989.scope. Oct 2 19:59:32.265395 systemd[1]: cri-containerd-511c0b7f418804badad5f6501e897663fd72837c584c62bfbc9b36be43cda989.scope: Deactivated successfully. 
Oct 2 19:59:32.283887 env[1142]: time="2023-10-02T19:59:32.283832266Z" level=info msg="shim disconnected" id=511c0b7f418804badad5f6501e897663fd72837c584c62bfbc9b36be43cda989 Oct 2 19:59:32.283887 env[1142]: time="2023-10-02T19:59:32.283881908Z" level=warning msg="cleaning up after shim disconnected" id=511c0b7f418804badad5f6501e897663fd72837c584c62bfbc9b36be43cda989 namespace=k8s.io Oct 2 19:59:32.283887 env[1142]: time="2023-10-02T19:59:32.283891669Z" level=info msg="cleaning up dead shim" Oct 2 19:59:32.291733 env[1142]: time="2023-10-02T19:59:32.291690016Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:59:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2284 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:59:32Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/511c0b7f418804badad5f6501e897663fd72837c584c62bfbc9b36be43cda989/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:59:32.291989 env[1142]: time="2023-10-02T19:59:32.291939146Z" level=error msg="copy shim log" error="read /proc/self/fd/40: file already closed" Oct 2 19:59:32.292249 env[1142]: time="2023-10-02T19:59:32.292198796Z" level=error msg="Failed to pipe stderr of container \"511c0b7f418804badad5f6501e897663fd72837c584c62bfbc9b36be43cda989\"" error="reading from a closed fifo" Oct 2 19:59:32.293238 env[1142]: time="2023-10-02T19:59:32.293201875Z" level=error msg="Failed to pipe stdout of container \"511c0b7f418804badad5f6501e897663fd72837c584c62bfbc9b36be43cda989\"" error="reading from a closed fifo" Oct 2 19:59:32.295239 env[1142]: time="2023-10-02T19:59:32.295188074Z" level=error msg="StartContainer for \"511c0b7f418804badad5f6501e897663fd72837c584c62bfbc9b36be43cda989\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:59:32.295797 kubelet[1441]: E1002 19:59:32.295402 1441 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="511c0b7f418804badad5f6501e897663fd72837c584c62bfbc9b36be43cda989" Oct 2 19:59:32.295797 kubelet[1441]: E1002 19:59:32.295502 1441 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:59:32.295797 kubelet[1441]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:59:32.295797 kubelet[1441]: rm /hostbin/cilium-mount Oct 2 19:59:32.295962 kubelet[1441]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-xmftz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-99b4d_kube-system(aa431012-a88a-4263-b54d-326eac1071a3): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:59:32.296035 kubelet[1441]: E1002 19:59:32.295608 1441 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-99b4d" podUID=aa431012-a88a-4263-b54d-326eac1071a3 Oct 2 19:59:32.468464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount40689215.mount: Deactivated successfully. 
Oct 2 19:59:32.662805 kubelet[1441]: E1002 19:59:32.662777 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:32.701370 kubelet[1441]: E1002 19:59:32.701312 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:32.990028 env[1142]: time="2023-10-02T19:59:32.989986831Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:59:32.991492 env[1142]: time="2023-10-02T19:59:32.991454569Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e0bfc5d64e2c86e8497f9da5fbf169dc17a08c923bc75187d41ff880cb71c12f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:59:32.993584 env[1142]: time="2023-10-02T19:59:32.993553571Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:59:32.994256 env[1142]: time="2023-10-02T19:59:32.994225878Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\" returns image reference \"sha256:e0bfc5d64e2c86e8497f9da5fbf169dc17a08c923bc75187d41ff880cb71c12f\"" Oct 2 19:59:32.996355 env[1142]: time="2023-10-02T19:59:32.996324560Z" level=info msg="CreateContainer within sandbox \"18ff4ce40afc814b7454b10fd6e64f8c0b89e660efd9ed96254ddbb76a70a913\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 19:59:33.007905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2158861236.mount: 
Deactivated successfully. Oct 2 19:59:33.010454 env[1142]: time="2023-10-02T19:59:33.010405475Z" level=info msg="CreateContainer within sandbox \"18ff4ce40afc814b7454b10fd6e64f8c0b89e660efd9ed96254ddbb76a70a913\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2206113ad16b540c8a4a645019f41259aea51395b0ff97cb3df88d0519052b87\"" Oct 2 19:59:33.011060 env[1142]: time="2023-10-02T19:59:33.011026540Z" level=info msg="StartContainer for \"2206113ad16b540c8a4a645019f41259aea51395b0ff97cb3df88d0519052b87\"" Oct 2 19:59:33.026831 systemd[1]: Started cri-containerd-2206113ad16b540c8a4a645019f41259aea51395b0ff97cb3df88d0519052b87.scope. Oct 2 19:59:33.040762 kubelet[1441]: E1002 19:59:33.040738 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:33.043064 env[1142]: time="2023-10-02T19:59:33.043018842Z" level=info msg="CreateContainer within sandbox \"83c6dcd7b4371a15793ffc1d3bca6414145f694b20cb9a508892b43078be3888\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:59:33.050000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.050000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.050000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.050000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.050000 
audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.050000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.050000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.050000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.050000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.050000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.050000 audit: BPF prog-id=90 op=LOAD Oct 2 19:59:33.051000 audit[2304]: AVC avc: denied { bpf } for pid=2304 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.051000 audit[2304]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=400019db38 a2=10 a3=0 items=0 ppid=2187 pid=2304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:33.051000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232303631313361643136623534306338613461363435303139663431 Oct 2 19:59:33.051000 audit[2304]: AVC avc: denied { perfmon } for pid=2304 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.051000 audit[2304]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=400019d5a0 a2=3c a3=0 items=0 ppid=2187 pid=2304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:33.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232303631313361643136623534306338613461363435303139663431 Oct 2 19:59:33.051000 audit[2304]: AVC avc: denied { bpf } for pid=2304 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.051000 audit[2304]: AVC avc: denied { bpf } for pid=2304 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.051000 audit[2304]: AVC avc: denied { bpf } for pid=2304 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.051000 audit[2304]: AVC avc: denied { perfmon } for pid=2304 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.051000 audit[2304]: 
AVC avc: denied { perfmon } for pid=2304 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.051000 audit[2304]: AVC avc: denied { perfmon } for pid=2304 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.051000 audit[2304]: AVC avc: denied { perfmon } for pid=2304 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.051000 audit[2304]: AVC avc: denied { perfmon } for pid=2304 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.051000 audit[2304]: AVC avc: denied { bpf } for pid=2304 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.051000 audit[2304]: AVC avc: denied { bpf } for pid=2304 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.051000 audit: BPF prog-id=91 op=LOAD Oct 2 19:59:33.051000 audit[2304]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400019d8e0 a2=78 a3=0 items=0 ppid=2187 pid=2304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:33.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232303631313361643136623534306338613461363435303139663431 Oct 2 19:59:33.052000 audit[2304]: AVC avc: denied { bpf } for 
pid=2304 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.052000 audit[2304]: AVC avc: denied { bpf } for pid=2304 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.052000 audit[2304]: AVC avc: denied { perfmon } for pid=2304 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.052000 audit[2304]: AVC avc: denied { perfmon } for pid=2304 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.052000 audit[2304]: AVC avc: denied { perfmon } for pid=2304 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.052000 audit[2304]: AVC avc: denied { perfmon } for pid=2304 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.052000 audit[2304]: AVC avc: denied { perfmon } for pid=2304 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.052000 audit[2304]: AVC avc: denied { bpf } for pid=2304 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.052000 audit[2304]: AVC avc: denied { bpf } for pid=2304 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.052000 audit: BPF prog-id=92 op=LOAD Oct 2 19:59:33.052000 audit[2304]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400019d670 a2=78 a3=0 
items=0 ppid=2187 pid=2304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:33.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232303631313361643136623534306338613461363435303139663431 Oct 2 19:59:33.052000 audit: BPF prog-id=92 op=UNLOAD Oct 2 19:59:33.052000 audit: BPF prog-id=91 op=UNLOAD Oct 2 19:59:33.052000 audit[2304]: AVC avc: denied { bpf } for pid=2304 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.052000 audit[2304]: AVC avc: denied { bpf } for pid=2304 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.052000 audit[2304]: AVC avc: denied { bpf } for pid=2304 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.052000 audit[2304]: AVC avc: denied { perfmon } for pid=2304 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.052000 audit[2304]: AVC avc: denied { perfmon } for pid=2304 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.052000 audit[2304]: AVC avc: denied { perfmon } for pid=2304 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.052000 audit[2304]: AVC avc: denied { perfmon } for pid=2304 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.052000 audit[2304]: AVC avc: denied { perfmon } for pid=2304 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.052000 audit[2304]: AVC avc: denied { bpf } for pid=2304 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.052000 audit[2304]: AVC avc: denied { bpf } for pid=2304 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.052000 audit: BPF prog-id=93 op=LOAD Oct 2 19:59:33.052000 audit[2304]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400019db40 a2=78 a3=0 items=0 ppid=2187 pid=2304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:33.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232303631313361643136623534306338613461363435303139663431 Oct 2 19:59:33.057523 env[1142]: time="2023-10-02T19:59:33.057455852Z" level=info msg="CreateContainer within sandbox \"83c6dcd7b4371a15793ffc1d3bca6414145f694b20cb9a508892b43078be3888\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"9fb3275686f0d22b8f528b3f2bebf14c28518d5dd700957fa6732537082069da\"" Oct 2 19:59:33.057940 env[1142]: time="2023-10-02T19:59:33.057912190Z" level=info msg="StartContainer for \"9fb3275686f0d22b8f528b3f2bebf14c28518d5dd700957fa6732537082069da\"" Oct 2 19:59:33.070232 env[1142]: 
time="2023-10-02T19:59:33.069191755Z" level=info msg="StartContainer for \"2206113ad16b540c8a4a645019f41259aea51395b0ff97cb3df88d0519052b87\" returns successfully" Oct 2 19:59:33.086035 systemd[1]: Started cri-containerd-9fb3275686f0d22b8f528b3f2bebf14c28518d5dd700957fa6732537082069da.scope. Oct 2 19:59:33.112289 systemd[1]: cri-containerd-9fb3275686f0d22b8f528b3f2bebf14c28518d5dd700957fa6732537082069da.scope: Deactivated successfully. Oct 2 19:59:33.151000 audit[2313]: AVC avc: denied { map_create } for pid=2313 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c12,c760 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c12,c760 tclass=bpf permissive=0 Oct 2 19:59:33.151000 audit[2313]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-13 a0=0 a1=40006bd768 a2=48 a3=0 items=0 ppid=2187 pid=2313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c12,c760 key=(null) Oct 2 19:59:33.151000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 19:59:33.205754 env[1142]: time="2023-10-02T19:59:33.205704262Z" level=info msg="shim disconnected" id=9fb3275686f0d22b8f528b3f2bebf14c28518d5dd700957fa6732537082069da Oct 2 19:59:33.206276 env[1142]: time="2023-10-02T19:59:33.206166880Z" level=warning msg="cleaning up after shim disconnected" id=9fb3275686f0d22b8f528b3f2bebf14c28518d5dd700957fa6732537082069da namespace=k8s.io Oct 2 19:59:33.206483 env[1142]: time="2023-10-02T19:59:33.206463052Z" level=info msg="cleaning up dead shim" Oct 2 19:59:33.229523 env[1142]: time="2023-10-02T19:59:33.229457679Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:59:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2360 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:59:33Z\" 
level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:59:33Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/9fb3275686f0d22b8f528b3f2bebf14c28518d5dd700957fa6732537082069da/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:59:33.229793 env[1142]: time="2023-10-02T19:59:33.229737570Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 19:59:33.230001 env[1142]: time="2023-10-02T19:59:33.229936298Z" level=error msg="Failed to pipe stdout of container \"9fb3275686f0d22b8f528b3f2bebf14c28518d5dd700957fa6732537082069da\"" error="reading from a closed fifo" Oct 2 19:59:33.230244 env[1142]: time="2023-10-02T19:59:33.230203428Z" level=error msg="Failed to pipe stderr of container \"9fb3275686f0d22b8f528b3f2bebf14c28518d5dd700957fa6732537082069da\"" error="reading from a closed fifo" Oct 2 19:59:33.231868 env[1142]: time="2023-10-02T19:59:33.231826772Z" level=error msg="StartContainer for \"9fb3275686f0d22b8f528b3f2bebf14c28518d5dd700957fa6732537082069da\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:59:33.232090 kubelet[1441]: E1002 19:59:33.232068 1441 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="9fb3275686f0d22b8f528b3f2bebf14c28518d5dd700957fa6732537082069da" Oct 2 19:59:33.232210 kubelet[1441]: E1002 
19:59:33.232182 1441 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:59:33.232210 kubelet[1441]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:59:33.232210 kubelet[1441]: rm /hostbin/cilium-mount Oct 2 19:59:33.232210 kubelet[1441]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-xmftz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-99b4d_kube-system(aa431012-a88a-4263-b54d-326eac1071a3): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable 
to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:59:33.232347 kubelet[1441]: E1002 19:59:33.232217 1441 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-99b4d" podUID=aa431012-a88a-4263-b54d-326eac1071a3 Oct 2 19:59:33.701860 kubelet[1441]: E1002 19:59:33.701809 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:34.046562 kubelet[1441]: E1002 19:59:34.046469 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:34.048126 kubelet[1441]: I1002 19:59:34.048105 1441 scope.go:115] "RemoveContainer" containerID="511c0b7f418804badad5f6501e897663fd72837c584c62bfbc9b36be43cda989" Oct 2 19:59:34.048482 kubelet[1441]: I1002 19:59:34.048463 1441 scope.go:115] "RemoveContainer" containerID="511c0b7f418804badad5f6501e897663fd72837c584c62bfbc9b36be43cda989" Oct 2 19:59:34.049620 env[1142]: time="2023-10-02T19:59:34.049579363Z" level=info msg="RemoveContainer for \"511c0b7f418804badad5f6501e897663fd72837c584c62bfbc9b36be43cda989\"" Oct 2 19:59:34.050260 env[1142]: time="2023-10-02T19:59:34.050228629Z" level=info msg="RemoveContainer for \"511c0b7f418804badad5f6501e897663fd72837c584c62bfbc9b36be43cda989\"" Oct 2 19:59:34.050492 env[1142]: time="2023-10-02T19:59:34.050382955Z" level=error msg="RemoveContainer for \"511c0b7f418804badad5f6501e897663fd72837c584c62bfbc9b36be43cda989\" failed" error="failed to set removing state for container 
\"511c0b7f418804badad5f6501e897663fd72837c584c62bfbc9b36be43cda989\": container is already in removing state" Oct 2 19:59:34.050735 kubelet[1441]: E1002 19:59:34.050717 1441 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"511c0b7f418804badad5f6501e897663fd72837c584c62bfbc9b36be43cda989\": container is already in removing state" containerID="511c0b7f418804badad5f6501e897663fd72837c584c62bfbc9b36be43cda989" Oct 2 19:59:34.050844 kubelet[1441]: E1002 19:59:34.050830 1441 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "511c0b7f418804badad5f6501e897663fd72837c584c62bfbc9b36be43cda989": container is already in removing state; Skipping pod "cilium-99b4d_kube-system(aa431012-a88a-4263-b54d-326eac1071a3)" Oct 2 19:59:34.050985 kubelet[1441]: E1002 19:59:34.050967 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:34.051318 kubelet[1441]: E1002 19:59:34.051300 1441 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-99b4d_kube-system(aa431012-a88a-4263-b54d-326eac1071a3)\"" pod="kube-system/cilium-99b4d" podUID=aa431012-a88a-4263-b54d-326eac1071a3 Oct 2 19:59:34.052639 env[1142]: time="2023-10-02T19:59:34.052611243Z" level=info msg="RemoveContainer for \"511c0b7f418804badad5f6501e897663fd72837c584c62bfbc9b36be43cda989\" returns successfully" Oct 2 19:59:34.702416 kubelet[1441]: E1002 19:59:34.702377 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:35.050817 kubelet[1441]: E1002 19:59:35.050783 1441 
dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:35.051329 kubelet[1441]: E1002 19:59:35.051313 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:35.051498 kubelet[1441]: E1002 19:59:35.051486 1441 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-99b4d_kube-system(aa431012-a88a-4263-b54d-326eac1071a3)\"" pod="kube-system/cilium-99b4d" podUID=aa431012-a88a-4263-b54d-326eac1071a3 Oct 2 19:59:35.388478 kubelet[1441]: W1002 19:59:35.388379 1441 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa431012_a88a_4263_b54d_326eac1071a3.slice/cri-containerd-511c0b7f418804badad5f6501e897663fd72837c584c62bfbc9b36be43cda989.scope WatchSource:0}: container "511c0b7f418804badad5f6501e897663fd72837c584c62bfbc9b36be43cda989" in namespace "k8s.io": not found Oct 2 19:59:35.702865 kubelet[1441]: E1002 19:59:35.702771 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:36.703717 kubelet[1441]: E1002 19:59:36.703683 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:37.663355 kubelet[1441]: E1002 19:59:37.663327 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:37.704824 kubelet[1441]: E1002 19:59:37.704793 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:59:38.498298 kubelet[1441]: W1002 19:59:38.498260 1441 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa431012_a88a_4263_b54d_326eac1071a3.slice/cri-containerd-9fb3275686f0d22b8f528b3f2bebf14c28518d5dd700957fa6732537082069da.scope WatchSource:0}: task 9fb3275686f0d22b8f528b3f2bebf14c28518d5dd700957fa6732537082069da not found: not found
Oct 2 19:59:38.705813 kubelet[1441]: E1002 19:59:38.705763 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:59:39.706134 kubelet[1441]: E1002 19:59:39.706071 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:59:40.706956 kubelet[1441]: E1002 19:59:40.706888 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:59:41.707629 kubelet[1441]: E1002 19:59:41.707586 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:59:42.664792 kubelet[1441]: E1002 19:59:42.664767 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:59:42.708217 kubelet[1441]: E1002 19:59:42.708183 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:59:43.709116 kubelet[1441]: E1002 19:59:43.709074 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:59:44.710084 kubelet[1441]: E1002 19:59:44.710022 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:59:45.711024 kubelet[1441]: E1002 19:59:45.710991 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:59:46.711567 kubelet[1441]: E1002 19:59:46.711515 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:59:47.551564 kubelet[1441]: E1002 19:59:47.551524 1441 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:59:47.666150 kubelet[1441]: E1002 19:59:47.666123 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:59:47.712504 kubelet[1441]: E1002 19:59:47.712469 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:59:48.713185 kubelet[1441]: E1002 19:59:48.713142 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:59:49.714316 kubelet[1441]: E1002 19:59:49.714273 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:59:49.720843 kubelet[1441]: E1002 19:59:49.720821 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:59:49.723052 env[1142]: time="2023-10-02T19:59:49.722998789Z" level=info msg="CreateContainer within sandbox \"83c6dcd7b4371a15793ffc1d3bca6414145f694b20cb9a508892b43078be3888\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}"
Oct 2 19:59:49.730935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1069142689.mount: Deactivated successfully.
Oct 2 19:59:49.734632 env[1142]: time="2023-10-02T19:59:49.734503298Z" level=info msg="CreateContainer within sandbox \"83c6dcd7b4371a15793ffc1d3bca6414145f694b20cb9a508892b43078be3888\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"ab9c296869aeadfca26cf733d166fdd8ade24f4052d760e5ec3ef218bffc7f5a\""
Oct 2 19:59:49.735343 env[1142]: time="2023-10-02T19:59:49.734962039Z" level=info msg="StartContainer for \"ab9c296869aeadfca26cf733d166fdd8ade24f4052d760e5ec3ef218bffc7f5a\""
Oct 2 19:59:49.753720 systemd[1]: Started cri-containerd-ab9c296869aeadfca26cf733d166fdd8ade24f4052d760e5ec3ef218bffc7f5a.scope.
Oct 2 19:59:49.773389 systemd[1]: cri-containerd-ab9c296869aeadfca26cf733d166fdd8ade24f4052d760e5ec3ef218bffc7f5a.scope: Deactivated successfully.
Oct 2 19:59:49.782865 env[1142]: time="2023-10-02T19:59:49.782817078Z" level=info msg="shim disconnected" id=ab9c296869aeadfca26cf733d166fdd8ade24f4052d760e5ec3ef218bffc7f5a
Oct 2 19:59:49.783091 env[1142]: time="2023-10-02T19:59:49.783071907Z" level=warning msg="cleaning up after shim disconnected" id=ab9c296869aeadfca26cf733d166fdd8ade24f4052d760e5ec3ef218bffc7f5a namespace=k8s.io
Oct 2 19:59:49.783220 env[1142]: time="2023-10-02T19:59:49.783203302Z" level=info msg="cleaning up dead shim"
Oct 2 19:59:49.791399 env[1142]: time="2023-10-02T19:59:49.791348995Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:59:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2401 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:59:49Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ab9c296869aeadfca26cf733d166fdd8ade24f4052d760e5ec3ef218bffc7f5a/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Oct 2 19:59:49.791802 env[1142]: time="2023-10-02T19:59:49.791749058Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed"
Oct 2 19:59:49.792075 env[1142]: time="2023-10-02T19:59:49.792024926Z" level=error msg="Failed to pipe stdout of container \"ab9c296869aeadfca26cf733d166fdd8ade24f4052d760e5ec3ef218bffc7f5a\"" error="reading from a closed fifo"
Oct 2 19:59:49.792194 env[1142]: time="2023-10-02T19:59:49.792056284Z" level=error msg="Failed to pipe stderr of container \"ab9c296869aeadfca26cf733d166fdd8ade24f4052d760e5ec3ef218bffc7f5a\"" error="reading from a closed fifo"
Oct 2 19:59:49.793618 env[1142]: time="2023-10-02T19:59:49.793574260Z" level=error msg="StartContainer for \"ab9c296869aeadfca26cf733d166fdd8ade24f4052d760e5ec3ef218bffc7f5a\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Oct 2 19:59:49.793798 kubelet[1441]: E1002 19:59:49.793772 1441 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ab9c296869aeadfca26cf733d166fdd8ade24f4052d760e5ec3ef218bffc7f5a"
Oct 2 19:59:49.793896 kubelet[1441]: E1002 19:59:49.793881 1441 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Oct 2 19:59:49.793896 kubelet[1441]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Oct 2 19:59:49.793896 kubelet[1441]: rm /hostbin/cilium-mount
Oct 2 19:59:49.793896 kubelet[1441]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-xmftz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-99b4d_kube-system(aa431012-a88a-4263-b54d-326eac1071a3): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Oct 2 19:59:49.794034 kubelet[1441]: E1002 19:59:49.793920 1441 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-99b4d" podUID=aa431012-a88a-4263-b54d-326eac1071a3
Oct 2 19:59:50.074966 kubelet[1441]: I1002 19:59:50.074940 1441 scope.go:115] "RemoveContainer" containerID="9fb3275686f0d22b8f528b3f2bebf14c28518d5dd700957fa6732537082069da"
Oct 2 19:59:50.075238 kubelet[1441]: I1002 19:59:50.075216 1441 scope.go:115] "RemoveContainer" containerID="9fb3275686f0d22b8f528b3f2bebf14c28518d5dd700957fa6732537082069da"
Oct 2 19:59:50.076399 env[1142]: time="2023-10-02T19:59:50.076353529Z" level=info msg="RemoveContainer for \"9fb3275686f0d22b8f528b3f2bebf14c28518d5dd700957fa6732537082069da\""
Oct 2 19:59:50.076737 env[1142]: time="2023-10-02T19:59:50.076712954Z" level=info msg="RemoveContainer for \"9fb3275686f0d22b8f528b3f2bebf14c28518d5dd700957fa6732537082069da\""
Oct 2 19:59:50.076832 env[1142]: time="2023-10-02T19:59:50.076805791Z" level=error msg="RemoveContainer for \"9fb3275686f0d22b8f528b3f2bebf14c28518d5dd700957fa6732537082069da\" failed" error="failed to set removing state for container \"9fb3275686f0d22b8f528b3f2bebf14c28518d5dd700957fa6732537082069da\": container is already in removing state"
Oct 2 19:59:50.077166 kubelet[1441]: E1002 19:59:50.077096 1441 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"9fb3275686f0d22b8f528b3f2bebf14c28518d5dd700957fa6732537082069da\": container is already in removing state" containerID="9fb3275686f0d22b8f528b3f2bebf14c28518d5dd700957fa6732537082069da"
Oct 2 19:59:50.077276 kubelet[1441]: E1002 19:59:50.077264 1441 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "9fb3275686f0d22b8f528b3f2bebf14c28518d5dd700957fa6732537082069da": container is already in removing state; Skipping pod "cilium-99b4d_kube-system(aa431012-a88a-4263-b54d-326eac1071a3)"
Oct 2 19:59:50.077392 kubelet[1441]: E1002 19:59:50.077382 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:59:50.077665 kubelet[1441]: E1002 19:59:50.077650 1441 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-99b4d_kube-system(aa431012-a88a-4263-b54d-326eac1071a3)\"" pod="kube-system/cilium-99b4d" podUID=aa431012-a88a-4263-b54d-326eac1071a3
Oct 2 19:59:50.078896 env[1142]: time="2023-10-02T19:59:50.078852706Z" level=info msg="RemoveContainer for \"9fb3275686f0d22b8f528b3f2bebf14c28518d5dd700957fa6732537082069da\" returns successfully"
Oct 2 19:59:50.714666 kubelet[1441]: E1002 19:59:50.714615 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:59:50.729124 systemd[1]: run-containerd-runc-k8s.io-ab9c296869aeadfca26cf733d166fdd8ade24f4052d760e5ec3ef218bffc7f5a-runc.9IShVl.mount: Deactivated successfully.
Oct 2 19:59:50.729241 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab9c296869aeadfca26cf733d166fdd8ade24f4052d760e5ec3ef218bffc7f5a-rootfs.mount: Deactivated successfully.
Oct 2 19:59:51.715606 kubelet[1441]: E1002 19:59:51.715562 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:59:52.667679 kubelet[1441]: E1002 19:59:52.667643 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:59:52.716080 kubelet[1441]: E1002 19:59:52.716041 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:59:52.889924 kubelet[1441]: W1002 19:59:52.889876 1441 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa431012_a88a_4263_b54d_326eac1071a3.slice/cri-containerd-ab9c296869aeadfca26cf733d166fdd8ade24f4052d760e5ec3ef218bffc7f5a.scope WatchSource:0}: task ab9c296869aeadfca26cf733d166fdd8ade24f4052d760e5ec3ef218bffc7f5a not found: not found
Oct 2 19:59:53.716829 kubelet[1441]: E1002 19:59:53.716779 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:59:54.717514 kubelet[1441]: E1002 19:59:54.717476 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:59:55.718228 kubelet[1441]: E1002 19:59:55.718180 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:59:56.718845 kubelet[1441]: E1002 19:59:56.718808 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:59:57.668181 kubelet[1441]: E1002 19:59:57.668152 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:59:57.720062 kubelet[1441]: E1002 19:59:57.720021 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:59:58.720894 kubelet[1441]: E1002 19:59:58.720829 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:59:59.721094 kubelet[1441]: E1002 19:59:59.721015 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:00.721107 kubelet[1441]: E1002 20:00:00.721025 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 20:00:00.721107 kubelet[1441]: E1002 20:00:00.721098 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:00.721526 kubelet[1441]: E1002 20:00:00.721310 1441 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-99b4d_kube-system(aa431012-a88a-4263-b54d-326eac1071a3)\"" pod="kube-system/cilium-99b4d" podUID=aa431012-a88a-4263-b54d-326eac1071a3
Oct 2 20:00:01.722282 kubelet[1441]: E1002 20:00:01.722205 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:02.668783 kubelet[1441]: E1002 20:00:02.668727 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 20:00:02.722379 kubelet[1441]: E1002 20:00:02.722325 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:03.722528 kubelet[1441]: E1002 20:00:03.722467 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:04.723515 kubelet[1441]: E1002 20:00:04.723447 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:05.723833 kubelet[1441]: E1002 20:00:05.723807 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:06.725391 kubelet[1441]: E1002 20:00:06.725359 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:07.551695 kubelet[1441]: E1002 20:00:07.551651 1441 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:07.568617 env[1142]: time="2023-10-02T20:00:07.568582938Z" level=info msg="StopPodSandbox for \"271835d25ec304052cf865c46df77f3c5951e0257652c5de5ff08da89962074d\""
Oct 2 20:00:07.569021 env[1142]: time="2023-10-02T20:00:07.568973969Z" level=info msg="TearDown network for sandbox \"271835d25ec304052cf865c46df77f3c5951e0257652c5de5ff08da89962074d\" successfully"
Oct 2 20:00:07.569102 env[1142]: time="2023-10-02T20:00:07.569082086Z" level=info msg="StopPodSandbox for \"271835d25ec304052cf865c46df77f3c5951e0257652c5de5ff08da89962074d\" returns successfully"
Oct 2 20:00:07.569561 env[1142]: time="2023-10-02T20:00:07.569533475Z" level=info msg="RemovePodSandbox for \"271835d25ec304052cf865c46df77f3c5951e0257652c5de5ff08da89962074d\""
Oct 2 20:00:07.569625 env[1142]: time="2023-10-02T20:00:07.569566234Z" level=info msg="Forcibly stopping sandbox \"271835d25ec304052cf865c46df77f3c5951e0257652c5de5ff08da89962074d\""
Oct 2 20:00:07.569654 env[1142]: time="2023-10-02T20:00:07.569629033Z" level=info msg="TearDown network for sandbox \"271835d25ec304052cf865c46df77f3c5951e0257652c5de5ff08da89962074d\" successfully"
Oct 2 20:00:07.571816 env[1142]: time="2023-10-02T20:00:07.571784621Z" level=info msg="RemovePodSandbox \"271835d25ec304052cf865c46df77f3c5951e0257652c5de5ff08da89962074d\" returns successfully"
Oct 2 20:00:07.572279 env[1142]: time="2023-10-02T20:00:07.572179731Z" level=info msg="StopPodSandbox for \"99adca3d6ad95114130f2906fbf0bad7dccc883d28324e8fa81b830f07adbc5d\""
Oct 2 20:00:07.572467 env[1142]: time="2023-10-02T20:00:07.572427645Z" level=info msg="TearDown network for sandbox \"99adca3d6ad95114130f2906fbf0bad7dccc883d28324e8fa81b830f07adbc5d\" successfully"
Oct 2 20:00:07.572550 env[1142]: time="2023-10-02T20:00:07.572534203Z" level=info msg="StopPodSandbox for \"99adca3d6ad95114130f2906fbf0bad7dccc883d28324e8fa81b830f07adbc5d\" returns successfully"
Oct 2 20:00:07.572833 env[1142]: time="2023-10-02T20:00:07.572809636Z" level=info msg="RemovePodSandbox for \"99adca3d6ad95114130f2906fbf0bad7dccc883d28324e8fa81b830f07adbc5d\""
Oct 2 20:00:07.572900 env[1142]: time="2023-10-02T20:00:07.572836716Z" level=info msg="Forcibly stopping sandbox \"99adca3d6ad95114130f2906fbf0bad7dccc883d28324e8fa81b830f07adbc5d\""
Oct 2 20:00:07.572930 env[1142]: time="2023-10-02T20:00:07.572903994Z" level=info msg="TearDown network for sandbox \"99adca3d6ad95114130f2906fbf0bad7dccc883d28324e8fa81b830f07adbc5d\" successfully"
Oct 2 20:00:07.574642 env[1142]: time="2023-10-02T20:00:07.574615833Z" level=info msg="RemovePodSandbox \"99adca3d6ad95114130f2906fbf0bad7dccc883d28324e8fa81b830f07adbc5d\" returns successfully"
Oct 2 20:00:07.669256 kubelet[1441]: E1002 20:00:07.669209 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 20:00:07.726526 kubelet[1441]: E1002 20:00:07.726492 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:08.729978 kubelet[1441]: E1002 20:00:08.729927 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:09.730210 kubelet[1441]: E1002 20:00:09.730169 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:10.731326 kubelet[1441]: E1002 20:00:10.731254 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:11.731860 kubelet[1441]: E1002 20:00:11.731814 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:12.670084 kubelet[1441]: E1002 20:00:12.670015 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 20:00:12.732707 kubelet[1441]: E1002 20:00:12.732649 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:13.733583 kubelet[1441]: E1002 20:00:13.733551 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:14.721557 kubelet[1441]: E1002 20:00:14.721522 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 20:00:14.723593 env[1142]: time="2023-10-02T20:00:14.723546218Z" level=info msg="CreateContainer within sandbox \"83c6dcd7b4371a15793ffc1d3bca6414145f694b20cb9a508892b43078be3888\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}"
Oct 2 20:00:14.731612 env[1142]: time="2023-10-02T20:00:14.731557352Z" level=info msg="CreateContainer within sandbox \"83c6dcd7b4371a15793ffc1d3bca6414145f694b20cb9a508892b43078be3888\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"28f7179185259ee998b74ee88b0b213ae9c0b1c17494b7d7eed319bd2862ac43\""
Oct 2 20:00:14.731974 env[1142]: time="2023-10-02T20:00:14.731893226Z" level=info msg="StartContainer for \"28f7179185259ee998b74ee88b0b213ae9c0b1c17494b7d7eed319bd2862ac43\""
Oct 2 20:00:14.734154 kubelet[1441]: E1002 20:00:14.734122 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:14.746517 systemd[1]: Started cri-containerd-28f7179185259ee998b74ee88b0b213ae9c0b1c17494b7d7eed319bd2862ac43.scope.
Oct 2 20:00:14.768504 systemd[1]: cri-containerd-28f7179185259ee998b74ee88b0b213ae9c0b1c17494b7d7eed319bd2862ac43.scope: Deactivated successfully.
Oct 2 20:00:14.776316 env[1142]: time="2023-10-02T20:00:14.776269538Z" level=info msg="shim disconnected" id=28f7179185259ee998b74ee88b0b213ae9c0b1c17494b7d7eed319bd2862ac43
Oct 2 20:00:14.776524 env[1142]: time="2023-10-02T20:00:14.776505653Z" level=warning msg="cleaning up after shim disconnected" id=28f7179185259ee998b74ee88b0b213ae9c0b1c17494b7d7eed319bd2862ac43 namespace=k8s.io
Oct 2 20:00:14.776602 env[1142]: time="2023-10-02T20:00:14.776588772Z" level=info msg="cleaning up dead shim"
Oct 2 20:00:14.784468 env[1142]: time="2023-10-02T20:00:14.784426069Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:00:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2443 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:00:14Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/28f7179185259ee998b74ee88b0b213ae9c0b1c17494b7d7eed319bd2862ac43/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Oct 2 20:00:14.784714 env[1142]: time="2023-10-02T20:00:14.784659625Z" level=error msg="copy shim log" error="read /proc/self/fd/47: file already closed"
Oct 2 20:00:14.784894 env[1142]: time="2023-10-02T20:00:14.784857421Z" level=error msg="Failed to pipe stderr of container \"28f7179185259ee998b74ee88b0b213ae9c0b1c17494b7d7eed319bd2862ac43\"" error="reading from a closed fifo"
Oct 2 20:00:14.785245 env[1142]: time="2023-10-02T20:00:14.785218094Z" level=error msg="Failed to pipe stdout of container \"28f7179185259ee998b74ee88b0b213ae9c0b1c17494b7d7eed319bd2862ac43\"" error="reading from a closed fifo"
Oct 2 20:00:14.786405 env[1142]: time="2023-10-02T20:00:14.786351794Z" level=error msg="StartContainer for \"28f7179185259ee998b74ee88b0b213ae9c0b1c17494b7d7eed319bd2862ac43\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Oct 2 20:00:14.787023 kubelet[1441]: E1002 20:00:14.786587 1441 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="28f7179185259ee998b74ee88b0b213ae9c0b1c17494b7d7eed319bd2862ac43"
Oct 2 20:00:14.787023 kubelet[1441]: E1002 20:00:14.786699 1441 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Oct 2 20:00:14.787023 kubelet[1441]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Oct 2 20:00:14.787023 kubelet[1441]: rm /hostbin/cilium-mount
Oct 2 20:00:14.787224 kubelet[1441]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-xmftz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-99b4d_kube-system(aa431012-a88a-4263-b54d-326eac1071a3): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Oct 2 20:00:14.787281 kubelet[1441]: E1002 20:00:14.786733 1441 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-99b4d" podUID=aa431012-a88a-4263-b54d-326eac1071a3
Oct 2 20:00:15.115262 kubelet[1441]: I1002 20:00:15.114908 1441 scope.go:115] "RemoveContainer" containerID="ab9c296869aeadfca26cf733d166fdd8ade24f4052d760e5ec3ef218bffc7f5a"
Oct 2 20:00:15.115262 kubelet[1441]: I1002 20:00:15.115233 1441 scope.go:115] "RemoveContainer" containerID="ab9c296869aeadfca26cf733d166fdd8ade24f4052d760e5ec3ef218bffc7f5a"
Oct 2 20:00:15.116760 env[1142]: time="2023-10-02T20:00:15.116438470Z" level=info msg="RemoveContainer for \"ab9c296869aeadfca26cf733d166fdd8ade24f4052d760e5ec3ef218bffc7f5a\""
Oct 2 20:00:15.117396 env[1142]: time="2023-10-02T20:00:15.117187857Z" level=info msg="RemoveContainer for \"ab9c296869aeadfca26cf733d166fdd8ade24f4052d760e5ec3ef218bffc7f5a\""
Oct 2 20:00:15.117396 env[1142]: time="2023-10-02T20:00:15.117263816Z" level=error msg="RemoveContainer for \"ab9c296869aeadfca26cf733d166fdd8ade24f4052d760e5ec3ef218bffc7f5a\" failed" error="failed to set removing state for container \"ab9c296869aeadfca26cf733d166fdd8ade24f4052d760e5ec3ef218bffc7f5a\": container is already in removing state"
Oct 2 20:00:15.117943 kubelet[1441]: E1002 20:00:15.117601 1441 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"ab9c296869aeadfca26cf733d166fdd8ade24f4052d760e5ec3ef218bffc7f5a\": container is already in removing state" containerID="ab9c296869aeadfca26cf733d166fdd8ade24f4052d760e5ec3ef218bffc7f5a"
Oct 2 20:00:15.117943 kubelet[1441]: E1002 20:00:15.117634 1441 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "ab9c296869aeadfca26cf733d166fdd8ade24f4052d760e5ec3ef218bffc7f5a": container is already in removing state; Skipping pod "cilium-99b4d_kube-system(aa431012-a88a-4263-b54d-326eac1071a3)"
Oct 2 20:00:15.117943 kubelet[1441]: E1002 20:00:15.117703 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 20:00:15.117943 kubelet[1441]: E1002 20:00:15.117911 1441 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-99b4d_kube-system(aa431012-a88a-4263-b54d-326eac1071a3)\"" pod="kube-system/cilium-99b4d" podUID=aa431012-a88a-4263-b54d-326eac1071a3
Oct 2 20:00:15.120270 env[1142]: time="2023-10-02T20:00:15.120132006Z" level=info msg="RemoveContainer for \"ab9c296869aeadfca26cf733d166fdd8ade24f4052d760e5ec3ef218bffc7f5a\" returns successfully"
Oct 2 20:00:15.729556 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28f7179185259ee998b74ee88b0b213ae9c0b1c17494b7d7eed319bd2862ac43-rootfs.mount: Deactivated successfully.
Oct 2 20:00:15.734497 kubelet[1441]: E1002 20:00:15.734472 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:16.740974 kubelet[1441]: E1002 20:00:16.740355 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:17.670991 kubelet[1441]: E1002 20:00:17.670966 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 20:00:17.740642 kubelet[1441]: E1002 20:00:17.740588 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:17.880603 kubelet[1441]: W1002 20:00:17.880566 1441 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa431012_a88a_4263_b54d_326eac1071a3.slice/cri-containerd-28f7179185259ee998b74ee88b0b213ae9c0b1c17494b7d7eed319bd2862ac43.scope WatchSource:0}: task 28f7179185259ee998b74ee88b0b213ae9c0b1c17494b7d7eed319bd2862ac43 not found: not found
Oct 2 20:00:18.741091 kubelet[1441]: E1002 20:00:18.741044 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:19.741948 kubelet[1441]: E1002 20:00:19.741910 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:20.742057 kubelet[1441]: E1002 20:00:20.742015 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:21.742608 kubelet[1441]: E1002 20:00:21.742579 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:22.672168 kubelet[1441]: E1002 20:00:22.672116 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 20:00:22.743804 kubelet[1441]: E1002 20:00:22.743749 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:23.744038 kubelet[1441]: E1002 20:00:23.744001 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:24.744611 kubelet[1441]: E1002 20:00:24.744562 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:25.745638 kubelet[1441]: E1002 20:00:25.745599 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:26.720704 kubelet[1441]: E1002 20:00:26.720676 1441 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 20:00:26.720890 kubelet[1441]: E1002 20:00:26.720873 1441 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-99b4d_kube-system(aa431012-a88a-4263-b54d-326eac1071a3)\"" pod="kube-system/cilium-99b4d" podUID=aa431012-a88a-4263-b54d-326eac1071a3
Oct 2 20:00:26.746155 kubelet[1441]: E1002 20:00:26.746082 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:27.551106 kubelet[1441]: E1002 20:00:27.551042 1441 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:27.673539 kubelet[1441]: E1002 20:00:27.673503 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 20:00:27.746829 kubelet[1441]: E1002 20:00:27.746786 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:28.747921 kubelet[1441]: E1002 20:00:28.747873 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:29.748855 kubelet[1441]: E1002 20:00:29.748800 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:30.749807 kubelet[1441]: E1002 20:00:30.749559 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:31.750440 kubelet[1441]: E1002 20:00:31.750384 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:32.388207 env[1142]: time="2023-10-02T20:00:32.388156791Z" level=info msg="StopPodSandbox for \"83c6dcd7b4371a15793ffc1d3bca6414145f694b20cb9a508892b43078be3888\""
Oct 2 20:00:32.390148 env[1142]: time="2023-10-02T20:00:32.388224430Z" level=info msg="Container to stop \"28f7179185259ee998b74ee88b0b213ae9c0b1c17494b7d7eed319bd2862ac43\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 2 20:00:32.389531 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-83c6dcd7b4371a15793ffc1d3bca6414145f694b20cb9a508892b43078be3888-shm.mount: Deactivated successfully.
Oct 2 20:00:32.397629 systemd[1]: cri-containerd-83c6dcd7b4371a15793ffc1d3bca6414145f694b20cb9a508892b43078be3888.scope: Deactivated successfully.
Oct 2 20:00:32.399187 kernel: kauditd_printk_skb: 107 callbacks suppressed
Oct 2 20:00:32.399273 kernel: audit: type=1334 audit(1696276832.396:741): prog-id=86 op=UNLOAD
Oct 2 20:00:32.396000 audit: BPF prog-id=86 op=UNLOAD
Oct 2 20:00:32.399380 env[1142]: time="2023-10-02T20:00:32.399124488Z" level=info msg="StopContainer for \"2206113ad16b540c8a4a645019f41259aea51395b0ff97cb3df88d0519052b87\" with timeout 30 (s)"
Oct 2 20:00:32.399742 env[1142]: time="2023-10-02T20:00:32.399710285Z" level=info msg="Stop container \"2206113ad16b540c8a4a645019f41259aea51395b0ff97cb3df88d0519052b87\" with signal terminated"
Oct 2 20:00:32.401000 audit: BPF prog-id=89 op=UNLOAD
Oct 2 20:00:32.403152 kernel: audit: type=1334 audit(1696276832.401:742): prog-id=89 op=UNLOAD
Oct 2 20:00:32.413424 systemd[1]: cri-containerd-2206113ad16b540c8a4a645019f41259aea51395b0ff97cb3df88d0519052b87.scope: Deactivated successfully.
Oct 2 20:00:32.412000 audit: BPF prog-id=90 op=UNLOAD
Oct 2 20:00:32.415160 kernel: audit: type=1334 audit(1696276832.412:743): prog-id=90 op=UNLOAD
Oct 2 20:00:32.417985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83c6dcd7b4371a15793ffc1d3bca6414145f694b20cb9a508892b43078be3888-rootfs.mount: Deactivated successfully.
Oct 2 20:00:32.418000 audit: BPF prog-id=93 op=UNLOAD
Oct 2 20:00:32.421189 kernel: audit: type=1334 audit(1696276832.418:744): prog-id=93 op=UNLOAD
Oct 2 20:00:32.424766 env[1142]: time="2023-10-02T20:00:32.424711581Z" level=info msg="shim disconnected" id=83c6dcd7b4371a15793ffc1d3bca6414145f694b20cb9a508892b43078be3888
Oct 2 20:00:32.424944 env[1142]: time="2023-10-02T20:00:32.424769141Z" level=warning msg="cleaning up after shim disconnected" id=83c6dcd7b4371a15793ffc1d3bca6414145f694b20cb9a508892b43078be3888 namespace=k8s.io
Oct 2 20:00:32.424944 env[1142]: time="2023-10-02T20:00:32.424780541Z" level=info msg="cleaning up dead shim"
Oct 2 20:00:32.434289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2206113ad16b540c8a4a645019f41259aea51395b0ff97cb3df88d0519052b87-rootfs.mount: Deactivated successfully.
Oct 2 20:00:32.437280 env[1142]: time="2023-10-02T20:00:32.437232950Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:00:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2487 runtime=io.containerd.runc.v2\n"
Oct 2 20:00:32.437580 env[1142]: time="2023-10-02T20:00:32.437555268Z" level=info msg="TearDown network for sandbox \"83c6dcd7b4371a15793ffc1d3bca6414145f694b20cb9a508892b43078be3888\" successfully"
Oct 2 20:00:32.437634 env[1142]: time="2023-10-02T20:00:32.437580908Z" level=info msg="StopPodSandbox for \"83c6dcd7b4371a15793ffc1d3bca6414145f694b20cb9a508892b43078be3888\" returns successfully"
Oct 2 20:00:32.438313 env[1142]: time="2023-10-02T20:00:32.438273984Z" level=info msg="shim disconnected" id=2206113ad16b540c8a4a645019f41259aea51395b0ff97cb3df88d0519052b87
Oct 2 20:00:32.438457 env[1142]: time="2023-10-02T20:00:32.438437503Z" level=warning msg="cleaning up after shim disconnected" id=2206113ad16b540c8a4a645019f41259aea51395b0ff97cb3df88d0519052b87 namespace=k8s.io
Oct 2 20:00:32.438533 env[1142]: time="2023-10-02T20:00:32.438518982Z" level=info msg="cleaning up dead shim"
Oct 2 20:00:32.449375 env[1142]: time="2023-10-02T20:00:32.449333720Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:00:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2506 runtime=io.containerd.runc.v2\n"
Oct 2 20:00:32.451180 env[1142]: time="2023-10-02T20:00:32.451133950Z" level=info msg="StopContainer for \"2206113ad16b540c8a4a645019f41259aea51395b0ff97cb3df88d0519052b87\" returns successfully"
Oct 2 20:00:32.451828 env[1142]: time="2023-10-02T20:00:32.451790386Z" level=info msg="StopPodSandbox for \"18ff4ce40afc814b7454b10fd6e64f8c0b89e660efd9ed96254ddbb76a70a913\""
Oct 2 20:00:32.451895 env[1142]: time="2023-10-02T20:00:32.451848546Z" level=info msg="Container to stop \"2206113ad16b540c8a4a645019f41259aea51395b0ff97cb3df88d0519052b87\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 2 20:00:32.452983 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-18ff4ce40afc814b7454b10fd6e64f8c0b89e660efd9ed96254ddbb76a70a913-shm.mount: Deactivated successfully.
Oct 2 20:00:32.459767 systemd[1]: cri-containerd-18ff4ce40afc814b7454b10fd6e64f8c0b89e660efd9ed96254ddbb76a70a913.scope: Deactivated successfully.
Oct 2 20:00:32.458000 audit: BPF prog-id=82 op=UNLOAD
Oct 2 20:00:32.461162 kernel: audit: type=1334 audit(1696276832.458:745): prog-id=82 op=UNLOAD
Oct 2 20:00:32.465000 audit: BPF prog-id=85 op=UNLOAD
Oct 2 20:00:32.467167 kernel: audit: type=1334 audit(1696276832.465:746): prog-id=85 op=UNLOAD
Oct 2 20:00:32.480901 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18ff4ce40afc814b7454b10fd6e64f8c0b89e660efd9ed96254ddbb76a70a913-rootfs.mount: Deactivated successfully.
Oct 2 20:00:32.487675 env[1142]: time="2023-10-02T20:00:32.487629821Z" level=info msg="shim disconnected" id=18ff4ce40afc814b7454b10fd6e64f8c0b89e660efd9ed96254ddbb76a70a913
Oct 2 20:00:32.487675 env[1142]: time="2023-10-02T20:00:32.487673901Z" level=warning msg="cleaning up after shim disconnected" id=18ff4ce40afc814b7454b10fd6e64f8c0b89e660efd9ed96254ddbb76a70a913 namespace=k8s.io
Oct 2 20:00:32.487928 env[1142]: time="2023-10-02T20:00:32.487687941Z" level=info msg="cleaning up dead shim"
Oct 2 20:00:32.496156 env[1142]: time="2023-10-02T20:00:32.496109852Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:00:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2537 runtime=io.containerd.runc.v2\n"
Oct 2 20:00:32.496445 env[1142]: time="2023-10-02T20:00:32.496420131Z" level=info msg="TearDown network for sandbox \"18ff4ce40afc814b7454b10fd6e64f8c0b89e660efd9ed96254ddbb76a70a913\" successfully"
Oct 2 20:00:32.496489 env[1142]: time="2023-10-02T20:00:32.496446051Z" level=info msg="StopPodSandbox for \"18ff4ce40afc814b7454b10fd6e64f8c0b89e660efd9ed96254ddbb76a70a913\" returns successfully"
Oct 2 20:00:32.543905 kubelet[1441]: I1002 20:00:32.543854 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-host-proc-sys-kernel\") pod \"aa431012-a88a-4263-b54d-326eac1071a3\" (UID: \"aa431012-a88a-4263-b54d-326eac1071a3\") "
Oct 2 20:00:32.543905 kubelet[1441]: I1002 20:00:32.543904 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aa431012-a88a-4263-b54d-326eac1071a3-hubble-tls\") pod \"aa431012-a88a-4263-b54d-326eac1071a3\" (UID: \"aa431012-a88a-4263-b54d-326eac1071a3\") "
Oct 2 20:00:32.544092 kubelet[1441]: I1002 20:00:32.543925 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-etc-cni-netd\") pod \"aa431012-a88a-4263-b54d-326eac1071a3\" (UID: \"aa431012-a88a-4263-b54d-326eac1071a3\") "
Oct 2 20:00:32.544092 kubelet[1441]: I1002 20:00:32.543948 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa431012-a88a-4263-b54d-326eac1071a3-cilium-config-path\") pod \"aa431012-a88a-4263-b54d-326eac1071a3\" (UID: \"aa431012-a88a-4263-b54d-326eac1071a3\") "
Oct 2 20:00:32.544092 kubelet[1441]: I1002 20:00:32.543964 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-cilium-run\") pod \"aa431012-a88a-4263-b54d-326eac1071a3\" (UID: \"aa431012-a88a-4263-b54d-326eac1071a3\") "
Oct 2 20:00:32.544092 kubelet[1441]: I1002 20:00:32.543972 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "aa431012-a88a-4263-b54d-326eac1071a3" (UID: "aa431012-a88a-4263-b54d-326eac1071a3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 20:00:32.544092 kubelet[1441]: I1002 20:00:32.543982 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-cni-path\") pod \"aa431012-a88a-4263-b54d-326eac1071a3\" (UID: \"aa431012-a88a-4263-b54d-326eac1071a3\") "
Oct 2 20:00:32.544092 kubelet[1441]: I1002 20:00:32.544007 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-cni-path" (OuterVolumeSpecName: "cni-path") pod "aa431012-a88a-4263-b54d-326eac1071a3" (UID: "aa431012-a88a-4263-b54d-326eac1071a3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 20:00:32.544281 kubelet[1441]: I1002 20:00:32.544042 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb757\" (UniqueName: \"kubernetes.io/projected/beda364c-1a53-4dd5-ac68-372ebf92e452-kube-api-access-zb757\") pod \"beda364c-1a53-4dd5-ac68-372ebf92e452\" (UID: \"beda364c-1a53-4dd5-ac68-372ebf92e452\") "
Oct 2 20:00:32.544281 kubelet[1441]: I1002 20:00:32.544067 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-hostproc\") pod \"aa431012-a88a-4263-b54d-326eac1071a3\" (UID: \"aa431012-a88a-4263-b54d-326eac1071a3\") "
Oct 2 20:00:32.544281 kubelet[1441]: I1002 20:00:32.544086 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-lib-modules\") pod \"aa431012-a88a-4263-b54d-326eac1071a3\" (UID: \"aa431012-a88a-4263-b54d-326eac1071a3\") "
Oct 2 20:00:32.544281 kubelet[1441]: I1002 20:00:32.544106 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/beda364c-1a53-4dd5-ac68-372ebf92e452-cilium-config-path\") pod \"beda364c-1a53-4dd5-ac68-372ebf92e452\" (UID: \"beda364c-1a53-4dd5-ac68-372ebf92e452\") "
Oct 2 20:00:32.544281 kubelet[1441]: I1002 20:00:32.544124 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-bpf-maps\") pod \"aa431012-a88a-4263-b54d-326eac1071a3\" (UID: \"aa431012-a88a-4263-b54d-326eac1071a3\") "
Oct 2 20:00:32.544281 kubelet[1441]: I1002 20:00:32.544159 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-cilium-cgroup\") pod \"aa431012-a88a-4263-b54d-326eac1071a3\" (UID: \"aa431012-a88a-4263-b54d-326eac1071a3\") "
Oct 2 20:00:32.544437 kubelet[1441]: I1002 20:00:32.544179 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-xtables-lock\") pod \"aa431012-a88a-4263-b54d-326eac1071a3\" (UID: \"aa431012-a88a-4263-b54d-326eac1071a3\") "
Oct 2 20:00:32.544437 kubelet[1441]: I1002 20:00:32.544199 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmftz\" (UniqueName: \"kubernetes.io/projected/aa431012-a88a-4263-b54d-326eac1071a3-kube-api-access-xmftz\") pod \"aa431012-a88a-4263-b54d-326eac1071a3\" (UID: \"aa431012-a88a-4263-b54d-326eac1071a3\") "
Oct 2 20:00:32.544437 kubelet[1441]: W1002 20:00:32.544194 1441 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/aa431012-a88a-4263-b54d-326eac1071a3/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Oct 2 20:00:32.544437 kubelet[1441]: I1002 20:00:32.544230 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "aa431012-a88a-4263-b54d-326eac1071a3" (UID: "aa431012-a88a-4263-b54d-326eac1071a3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 20:00:32.544532 kubelet[1441]: I1002 20:00:32.544445 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-hostproc" (OuterVolumeSpecName: "hostproc") pod "aa431012-a88a-4263-b54d-326eac1071a3" (UID: "aa431012-a88a-4263-b54d-326eac1071a3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 20:00:32.544532 kubelet[1441]: I1002 20:00:32.544468 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "aa431012-a88a-4263-b54d-326eac1071a3" (UID: "aa431012-a88a-4263-b54d-326eac1071a3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 20:00:32.544583 kubelet[1441]: W1002 20:00:32.544565 1441 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/beda364c-1a53-4dd5-ac68-372ebf92e452/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Oct 2 20:00:32.546594 kubelet[1441]: I1002 20:00:32.543946 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "aa431012-a88a-4263-b54d-326eac1071a3" (UID: "aa431012-a88a-4263-b54d-326eac1071a3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 20:00:32.546594 kubelet[1441]: I1002 20:00:32.544681 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "aa431012-a88a-4263-b54d-326eac1071a3" (UID: "aa431012-a88a-4263-b54d-326eac1071a3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 20:00:32.546594 kubelet[1441]: I1002 20:00:32.544701 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "aa431012-a88a-4263-b54d-326eac1071a3" (UID: "aa431012-a88a-4263-b54d-326eac1071a3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 20:00:32.546594 kubelet[1441]: I1002 20:00:32.544717 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "aa431012-a88a-4263-b54d-326eac1071a3" (UID: "aa431012-a88a-4263-b54d-326eac1071a3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 20:00:32.546594 kubelet[1441]: I1002 20:00:32.544733 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "aa431012-a88a-4263-b54d-326eac1071a3" (UID: "aa431012-a88a-4263-b54d-326eac1071a3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 20:00:32.546801 kubelet[1441]: I1002 20:00:32.546313 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa431012-a88a-4263-b54d-326eac1071a3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "aa431012-a88a-4263-b54d-326eac1071a3" (UID: "aa431012-a88a-4263-b54d-326eac1071a3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 2 20:00:32.546801 kubelet[1441]: I1002 20:00:32.544218 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-host-proc-sys-net\") pod \"aa431012-a88a-4263-b54d-326eac1071a3\" (UID: \"aa431012-a88a-4263-b54d-326eac1071a3\") "
Oct 2 20:00:32.546801 kubelet[1441]: I1002 20:00:32.546371 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aa431012-a88a-4263-b54d-326eac1071a3-clustermesh-secrets\") pod \"aa431012-a88a-4263-b54d-326eac1071a3\" (UID: \"aa431012-a88a-4263-b54d-326eac1071a3\") "
Oct 2 20:00:32.546801 kubelet[1441]: I1002 20:00:32.546394 1441 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/aa431012-a88a-4263-b54d-326eac1071a3-cilium-ipsec-secrets\") pod \"aa431012-a88a-4263-b54d-326eac1071a3\" (UID: \"aa431012-a88a-4263-b54d-326eac1071a3\") "
Oct 2 20:00:32.546801 kubelet[1441]: I1002 20:00:32.546428 1441 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-xtables-lock\") on node \"10.0.0.10\" DevicePath \"\""
Oct 2 20:00:32.546801 kubelet[1441]: I1002 20:00:32.546441 1441 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-host-proc-sys-net\") on node \"10.0.0.10\" DevicePath \"\""
Oct 2 20:00:32.546801 kubelet[1441]: I1002 20:00:32.546451 1441 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-host-proc-sys-kernel\") on node \"10.0.0.10\" DevicePath \"\""
Oct 2 20:00:32.547014 kubelet[1441]: I1002 20:00:32.546460 1441 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-etc-cni-netd\") on node \"10.0.0.10\" DevicePath \"\""
Oct 2 20:00:32.547014 kubelet[1441]: I1002 20:00:32.546469 1441 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa431012-a88a-4263-b54d-326eac1071a3-cilium-config-path\") on node \"10.0.0.10\" DevicePath \"\""
Oct 2 20:00:32.547014 kubelet[1441]: I1002 20:00:32.546478 1441 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-cilium-run\") on node \"10.0.0.10\" DevicePath \"\""
Oct 2 20:00:32.547014 kubelet[1441]: I1002 20:00:32.546486 1441 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-cni-path\") on node \"10.0.0.10\" DevicePath \"\""
Oct 2 20:00:32.547014 kubelet[1441]: I1002 20:00:32.546495 1441 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-hostproc\") on node \"10.0.0.10\" DevicePath \"\""
Oct 2 20:00:32.547014 kubelet[1441]: I1002 20:00:32.546503 1441 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-lib-modules\") on node \"10.0.0.10\" DevicePath \"\""
Oct 2 20:00:32.547014 kubelet[1441]: I1002 20:00:32.546512 1441 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-bpf-maps\") on node \"10.0.0.10\" DevicePath \"\""
Oct 2 20:00:32.547014 kubelet[1441]: I1002 20:00:32.546520 1441 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aa431012-a88a-4263-b54d-326eac1071a3-cilium-cgroup\") on node \"10.0.0.10\" DevicePath \"\""
Oct 2 20:00:32.547226 kubelet[1441]: I1002 20:00:32.546742 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/beda364c-1a53-4dd5-ac68-372ebf92e452-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "beda364c-1a53-4dd5-ac68-372ebf92e452" (UID: "beda364c-1a53-4dd5-ac68-372ebf92e452"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 2 20:00:32.547663 kubelet[1441]: I1002 20:00:32.547633 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beda364c-1a53-4dd5-ac68-372ebf92e452-kube-api-access-zb757" (OuterVolumeSpecName: "kube-api-access-zb757") pod "beda364c-1a53-4dd5-ac68-372ebf92e452" (UID: "beda364c-1a53-4dd5-ac68-372ebf92e452"). InnerVolumeSpecName "kube-api-access-zb757". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 20:00:32.548055 kubelet[1441]: I1002 20:00:32.548027 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa431012-a88a-4263-b54d-326eac1071a3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "aa431012-a88a-4263-b54d-326eac1071a3" (UID: "aa431012-a88a-4263-b54d-326eac1071a3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 20:00:32.549256 kubelet[1441]: I1002 20:00:32.549224 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa431012-a88a-4263-b54d-326eac1071a3-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "aa431012-a88a-4263-b54d-326eac1071a3" (UID: "aa431012-a88a-4263-b54d-326eac1071a3"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 2 20:00:32.549899 kubelet[1441]: I1002 20:00:32.549866 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa431012-a88a-4263-b54d-326eac1071a3-kube-api-access-xmftz" (OuterVolumeSpecName: "kube-api-access-xmftz") pod "aa431012-a88a-4263-b54d-326eac1071a3" (UID: "aa431012-a88a-4263-b54d-326eac1071a3"). InnerVolumeSpecName "kube-api-access-xmftz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 20:00:32.550796 kubelet[1441]: I1002 20:00:32.550773 1441 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa431012-a88a-4263-b54d-326eac1071a3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "aa431012-a88a-4263-b54d-326eac1071a3" (UID: "aa431012-a88a-4263-b54d-326eac1071a3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 2 20:00:32.647242 kubelet[1441]: I1002 20:00:32.647124 1441 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aa431012-a88a-4263-b54d-326eac1071a3-hubble-tls\") on node \"10.0.0.10\" DevicePath \"\""
Oct 2 20:00:32.647406 kubelet[1441]: I1002 20:00:32.647395 1441 reconciler.go:399] "Volume detached for volume \"kube-api-access-zb757\" (UniqueName: \"kubernetes.io/projected/beda364c-1a53-4dd5-ac68-372ebf92e452-kube-api-access-zb757\") on node \"10.0.0.10\" DevicePath \"\""
Oct 2 20:00:32.647499 kubelet[1441]: I1002 20:00:32.647489 1441 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/beda364c-1a53-4dd5-ac68-372ebf92e452-cilium-config-path\") on node \"10.0.0.10\" DevicePath \"\""
Oct 2 20:00:32.647569 kubelet[1441]: I1002 20:00:32.647560 1441 reconciler.go:399] "Volume detached for volume \"kube-api-access-xmftz\" (UniqueName: \"kubernetes.io/projected/aa431012-a88a-4263-b54d-326eac1071a3-kube-api-access-xmftz\") on node \"10.0.0.10\" DevicePath \"\""
Oct 2 20:00:32.647632 kubelet[1441]: I1002 20:00:32.647624 1441 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aa431012-a88a-4263-b54d-326eac1071a3-clustermesh-secrets\") on node \"10.0.0.10\" DevicePath \"\""
Oct 2 20:00:32.647705 kubelet[1441]: I1002 20:00:32.647695 1441 reconciler.go:399] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/aa431012-a88a-4263-b54d-326eac1071a3-cilium-ipsec-secrets\") on node \"10.0.0.10\" DevicePath \"\""
Oct 2 20:00:32.674807 kubelet[1441]: E1002 20:00:32.674785 1441 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 20:00:32.750526 kubelet[1441]: E1002 20:00:32.750484 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:33.145103 kubelet[1441]: I1002 20:00:33.145076 1441 scope.go:115] "RemoveContainer" containerID="2206113ad16b540c8a4a645019f41259aea51395b0ff97cb3df88d0519052b87"
Oct 2 20:00:33.146207 env[1142]: time="2023-10-02T20:00:33.146171094Z" level=info msg="RemoveContainer for \"2206113ad16b540c8a4a645019f41259aea51395b0ff97cb3df88d0519052b87\""
Oct 2 20:00:33.148630 systemd[1]: Removed slice kubepods-besteffort-podbeda364c_1a53_4dd5_ac68_372ebf92e452.slice.
Oct 2 20:00:33.149110 env[1142]: time="2023-10-02T20:00:33.149078720Z" level=info msg="RemoveContainer for \"2206113ad16b540c8a4a645019f41259aea51395b0ff97cb3df88d0519052b87\" returns successfully"
Oct 2 20:00:33.149322 kubelet[1441]: I1002 20:00:33.149306 1441 scope.go:115] "RemoveContainer" containerID="2206113ad16b540c8a4a645019f41259aea51395b0ff97cb3df88d0519052b87"
Oct 2 20:00:33.150401 env[1142]: time="2023-10-02T20:00:33.150341793Z" level=error msg="ContainerStatus for \"2206113ad16b540c8a4a645019f41259aea51395b0ff97cb3df88d0519052b87\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2206113ad16b540c8a4a645019f41259aea51395b0ff97cb3df88d0519052b87\": not found"
Oct 2 20:00:33.150576 kubelet[1441]: E1002 20:00:33.150564 1441 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2206113ad16b540c8a4a645019f41259aea51395b0ff97cb3df88d0519052b87\": not found" containerID="2206113ad16b540c8a4a645019f41259aea51395b0ff97cb3df88d0519052b87"
Oct 2 20:00:33.150641 kubelet[1441]: I1002 20:00:33.150599 1441 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:2206113ad16b540c8a4a645019f41259aea51395b0ff97cb3df88d0519052b87} err="failed to get container status \"2206113ad16b540c8a4a645019f41259aea51395b0ff97cb3df88d0519052b87\": rpc error: code = NotFound desc = an error occurred when try to find container \"2206113ad16b540c8a4a645019f41259aea51395b0ff97cb3df88d0519052b87\": not found"
Oct 2 20:00:33.150641 kubelet[1441]: I1002 20:00:33.150611 1441 scope.go:115] "RemoveContainer" containerID="28f7179185259ee998b74ee88b0b213ae9c0b1c17494b7d7eed319bd2862ac43"
Oct 2 20:00:33.152037 systemd[1]: Removed slice kubepods-burstable-podaa431012_a88a_4263_b54d_326eac1071a3.slice.
Oct 2 20:00:33.152147 env[1142]: time="2023-10-02T20:00:33.152110864Z" level=info msg="RemoveContainer for \"28f7179185259ee998b74ee88b0b213ae9c0b1c17494b7d7eed319bd2862ac43\""
Oct 2 20:00:33.156275 env[1142]: time="2023-10-02T20:00:33.156242843Z" level=info msg="RemoveContainer for \"28f7179185259ee998b74ee88b0b213ae9c0b1c17494b7d7eed319bd2862ac43\" returns successfully"
Oct 2 20:00:33.389467 systemd[1]: var-lib-kubelet-pods-aa431012\x2da88a\x2d4263\x2db54d\x2d326eac1071a3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Oct 2 20:00:33.389571 systemd[1]: var-lib-kubelet-pods-aa431012\x2da88a\x2d4263\x2db54d\x2d326eac1071a3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxmftz.mount: Deactivated successfully.
Oct 2 20:00:33.389635 systemd[1]: var-lib-kubelet-pods-beda364c\x2d1a53\x2d4dd5\x2dac68\x2d372ebf92e452-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzb757.mount: Deactivated successfully.
Oct 2 20:00:33.389684 systemd[1]: var-lib-kubelet-pods-aa431012\x2da88a\x2d4263\x2db54d\x2d326eac1071a3-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Oct 2 20:00:33.389732 systemd[1]: var-lib-kubelet-pods-aa431012\x2da88a\x2d4263\x2db54d\x2d326eac1071a3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Oct 2 20:00:33.723644 kubelet[1441]: I1002 20:00:33.723616 1441 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=aa431012-a88a-4263-b54d-326eac1071a3 path="/var/lib/kubelet/pods/aa431012-a88a-4263-b54d-326eac1071a3/volumes"
Oct 2 20:00:33.724178 kubelet[1441]: I1002 20:00:33.724163 1441 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=beda364c-1a53-4dd5-ac68-372ebf92e452 path="/var/lib/kubelet/pods/beda364c-1a53-4dd5-ac68-372ebf92e452/volumes"
Oct 2 20:00:33.751554 kubelet[1441]: E1002 20:00:33.751521 1441 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"