Oct 2 19:30:41.765623 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Oct 2 19:30:41.765645 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Oct 2 17:55:37 -00 2023 Oct 2 19:30:41.765653 kernel: efi: EFI v2.70 by EDK II Oct 2 19:30:41.765659 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Oct 2 19:30:41.765664 kernel: random: crng init done Oct 2 19:30:41.765670 kernel: ACPI: Early table checksum verification disabled Oct 2 19:30:41.765676 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Oct 2 19:30:41.765683 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Oct 2 19:30:41.765689 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:30:41.765695 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:30:41.765701 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:30:41.765706 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:30:41.765711 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:30:41.765717 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:30:41.765729 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:30:41.765735 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:30:41.765741 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:30:41.765748 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Oct 2 19:30:41.765754 kernel: NUMA: Failed to initialise from firmware Oct 2 19:30:41.765760 kernel: NUMA: Faking a 
node at [mem 0x0000000040000000-0x00000000dcffffff] Oct 2 19:30:41.765766 kernel: NUMA: NODE_DATA [mem 0xdcb0a900-0xdcb0ffff] Oct 2 19:30:41.765772 kernel: Zone ranges: Oct 2 19:30:41.765778 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Oct 2 19:30:41.765786 kernel: DMA32 empty Oct 2 19:30:41.765791 kernel: Normal empty Oct 2 19:30:41.765797 kernel: Movable zone start for each node Oct 2 19:30:41.765803 kernel: Early memory node ranges Oct 2 19:30:41.765808 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Oct 2 19:30:41.765814 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Oct 2 19:30:41.765820 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Oct 2 19:30:41.765826 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Oct 2 19:30:41.765832 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Oct 2 19:30:41.765838 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Oct 2 19:30:41.765844 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Oct 2 19:30:41.765850 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Oct 2 19:30:41.765857 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Oct 2 19:30:41.765863 kernel: psci: probing for conduit method from ACPI. Oct 2 19:30:41.765869 kernel: psci: PSCIv1.1 detected in firmware. 
Oct 2 19:30:41.765876 kernel: psci: Using standard PSCI v0.2 function IDs Oct 2 19:30:41.765882 kernel: psci: Trusted OS migration not required Oct 2 19:30:41.765892 kernel: psci: SMC Calling Convention v1.1 Oct 2 19:30:41.765899 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Oct 2 19:30:41.765907 kernel: ACPI: SRAT not present Oct 2 19:30:41.765914 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784 Oct 2 19:30:41.765920 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096 Oct 2 19:30:41.765926 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Oct 2 19:30:41.765933 kernel: Detected PIPT I-cache on CPU0 Oct 2 19:30:41.765939 kernel: CPU features: detected: GIC system register CPU interface Oct 2 19:30:41.765945 kernel: CPU features: detected: Hardware dirty bit management Oct 2 19:30:41.765951 kernel: CPU features: detected: Spectre-v4 Oct 2 19:30:41.765957 kernel: CPU features: detected: Spectre-BHB Oct 2 19:30:41.765964 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 2 19:30:41.765970 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 2 19:30:41.765976 kernel: CPU features: detected: ARM erratum 1418040 Oct 2 19:30:41.765982 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Oct 2 19:30:41.765988 kernel: Policy zone: DMA Oct 2 19:30:41.765996 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca Oct 2 19:30:41.766002 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Oct 2 19:30:41.766008 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 2 19:30:41.766014 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 19:30:41.766020 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 19:30:41.766027 kernel: Memory: 2459276K/2572288K available (9792K kernel code, 2092K rwdata, 7548K rodata, 34560K init, 779K bss, 113012K reserved, 0K cma-reserved) Oct 2 19:30:41.766035 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 2 19:30:41.766041 kernel: trace event string verifier disabled Oct 2 19:30:41.766047 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 2 19:30:41.766053 kernel: rcu: RCU event tracing is enabled. Oct 2 19:30:41.766060 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 2 19:30:41.766067 kernel: Trampoline variant of Tasks RCU enabled. Oct 2 19:30:41.766073 kernel: Tracing variant of Tasks RCU enabled. Oct 2 19:30:41.766079 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Oct 2 19:30:41.766086 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 2 19:30:41.766092 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 2 19:30:41.766110 kernel: GICv3: 256 SPIs implemented Oct 2 19:30:41.766118 kernel: GICv3: 0 Extended SPIs implemented Oct 2 19:30:41.766124 kernel: GICv3: Distributor has no Range Selector support Oct 2 19:30:41.766130 kernel: Root IRQ handler: gic_handle_irq Oct 2 19:30:41.766136 kernel: GICv3: 16 PPIs implemented Oct 2 19:30:41.766143 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Oct 2 19:30:41.766149 kernel: ACPI: SRAT not present Oct 2 19:30:41.766155 kernel: ITS [mem 0x08080000-0x0809ffff] Oct 2 19:30:41.766161 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Oct 2 19:30:41.766168 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Oct 2 19:30:41.766174 kernel: GICv3: using LPI property table @0x00000000400d0000 Oct 2 19:30:41.766180 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Oct 2 19:30:41.766186 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 2 19:30:41.766194 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Oct 2 19:30:41.766200 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Oct 2 19:30:41.766207 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Oct 2 19:30:41.766213 kernel: arm-pv: using stolen time PV Oct 2 19:30:41.766219 kernel: Console: colour dummy device 80x25 Oct 2 19:30:41.766226 kernel: ACPI: Core revision 20210730 Oct 2 19:30:41.766232 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
50.00 BogoMIPS (lpj=25000) Oct 2 19:30:41.766239 kernel: pid_max: default: 32768 minimum: 301 Oct 2 19:30:41.766245 kernel: LSM: Security Framework initializing Oct 2 19:30:41.766251 kernel: SELinux: Initializing. Oct 2 19:30:41.766259 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:30:41.766266 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:30:41.766273 kernel: rcu: Hierarchical SRCU implementation. Oct 2 19:30:41.766279 kernel: Platform MSI: ITS@0x8080000 domain created Oct 2 19:30:41.766285 kernel: PCI/MSI: ITS@0x8080000 domain created Oct 2 19:30:41.766359 kernel: Remapping and enabling EFI services. Oct 2 19:30:41.766366 kernel: smp: Bringing up secondary CPUs ... Oct 2 19:30:41.766373 kernel: Detected PIPT I-cache on CPU1 Oct 2 19:30:41.766379 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Oct 2 19:30:41.766393 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Oct 2 19:30:41.766399 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 2 19:30:41.766435 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Oct 2 19:30:41.766443 kernel: Detected PIPT I-cache on CPU2 Oct 2 19:30:41.766449 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Oct 2 19:30:41.766456 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Oct 2 19:30:41.766462 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 2 19:30:41.766469 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Oct 2 19:30:41.766475 kernel: Detected PIPT I-cache on CPU3 Oct 2 19:30:41.766481 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Oct 2 19:30:41.766517 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Oct 2 19:30:41.766526 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 2 19:30:41.766532 kernel: CPU3: 
Booted secondary processor 0x0000000003 [0x413fd0c1] Oct 2 19:30:41.766538 kernel: smp: Brought up 1 node, 4 CPUs Oct 2 19:30:41.766550 kernel: SMP: Total of 4 processors activated. Oct 2 19:30:41.766559 kernel: CPU features: detected: 32-bit EL0 Support Oct 2 19:30:41.766566 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Oct 2 19:30:41.766619 kernel: CPU features: detected: Common not Private translations Oct 2 19:30:41.766627 kernel: CPU features: detected: CRC32 instructions Oct 2 19:30:41.766634 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Oct 2 19:30:41.766641 kernel: CPU features: detected: LSE atomic instructions Oct 2 19:30:41.766647 kernel: CPU features: detected: Privileged Access Never Oct 2 19:30:41.766657 kernel: CPU features: detected: RAS Extension Support Oct 2 19:30:41.766694 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Oct 2 19:30:41.766702 kernel: CPU: All CPU(s) started at EL1 Oct 2 19:30:41.766709 kernel: alternatives: patching kernel code Oct 2 19:30:41.766719 kernel: devtmpfs: initialized Oct 2 19:30:41.766726 kernel: KASLR enabled Oct 2 19:30:41.766733 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 19:30:41.766740 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 2 19:30:41.766774 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 19:30:41.766784 kernel: SMBIOS 3.0.0 present. 
Oct 2 19:30:41.766790 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Oct 2 19:30:41.766797 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 19:30:41.766804 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 2 19:30:41.766811 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 2 19:30:41.766820 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 2 19:30:41.766827 kernel: audit: initializing netlink subsys (disabled) Oct 2 19:30:41.766833 kernel: audit: type=2000 audit(0.033:1): state=initialized audit_enabled=0 res=1 Oct 2 19:30:41.766840 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 19:30:41.766846 kernel: cpuidle: using governor menu Oct 2 19:30:41.766853 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Oct 2 19:30:41.766860 kernel: ASID allocator initialised with 32768 entries Oct 2 19:30:41.766866 kernel: ACPI: bus type PCI registered Oct 2 19:30:41.766873 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 19:30:41.766881 kernel: Serial: AMBA PL011 UART driver Oct 2 19:30:41.766888 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 2 19:30:41.766894 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Oct 2 19:30:41.766901 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 19:30:41.766908 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Oct 2 19:30:41.766914 kernel: cryptd: max_cpu_qlen set to 1000 Oct 2 19:30:41.766921 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 2 19:30:41.766928 kernel: ACPI: Added _OSI(Module Device) Oct 2 19:30:41.766934 kernel: ACPI: Added _OSI(Processor Device) Oct 2 19:30:41.766942 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 19:30:41.766949 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 19:30:41.766956 kernel: ACPI: Added 
_OSI(Linux-Dell-Video) Oct 2 19:30:41.766962 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 19:30:41.766969 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 19:30:41.766975 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 2 19:30:41.766982 kernel: ACPI: Interpreter enabled Oct 2 19:30:41.766989 kernel: ACPI: Using GIC for interrupt routing Oct 2 19:30:41.766995 kernel: ACPI: MCFG table detected, 1 entries Oct 2 19:30:41.767003 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Oct 2 19:30:41.767011 kernel: printk: console [ttyAMA0] enabled Oct 2 19:30:41.767017 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 2 19:30:41.767201 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 2 19:30:41.767270 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 2 19:30:41.767332 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 2 19:30:41.767406 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Oct 2 19:30:41.767479 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Oct 2 19:30:41.767489 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Oct 2 19:30:41.767496 kernel: PCI host bridge to bus 0000:00 Oct 2 19:30:41.767568 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Oct 2 19:30:41.767629 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Oct 2 19:30:41.767698 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Oct 2 19:30:41.767761 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 2 19:30:41.767851 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Oct 2 19:30:41.767929 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Oct 2 19:30:41.767997 kernel: pci 0000:00:01.0: reg 0x10: [io 
0x0000-0x001f] Oct 2 19:30:41.768063 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Oct 2 19:30:41.768156 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Oct 2 19:30:41.768225 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Oct 2 19:30:41.768292 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Oct 2 19:30:41.768363 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Oct 2 19:30:41.768434 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Oct 2 19:30:41.768501 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 2 19:30:41.768559 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Oct 2 19:30:41.768568 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 2 19:30:41.768574 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 2 19:30:41.768581 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 2 19:30:41.768590 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 2 19:30:41.768597 kernel: iommu: Default domain type: Translated Oct 2 19:30:41.768603 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 2 19:30:41.768610 kernel: vgaarb: loaded Oct 2 19:30:41.768617 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 19:30:41.768624 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 2 19:30:41.768631 kernel: PTP clock support registered Oct 2 19:30:41.768638 kernel: Registered efivars operations Oct 2 19:30:41.768645 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 2 19:30:41.768651 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 19:30:41.768660 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 19:30:41.768666 kernel: pnp: PnP ACPI init Oct 2 19:30:41.768737 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Oct 2 19:30:41.768747 kernel: pnp: PnP ACPI: found 1 devices Oct 2 19:30:41.768754 kernel: NET: Registered PF_INET protocol family Oct 2 19:30:41.768761 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 2 19:30:41.768767 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 2 19:30:41.768774 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 19:30:41.768783 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 2 19:30:41.768789 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Oct 2 19:30:41.768796 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 2 19:30:41.768803 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:30:41.768810 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:30:41.768816 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 19:30:41.768823 kernel: PCI: CLS 0 bytes, default 64 Oct 2 19:30:41.768830 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Oct 2 19:30:41.768838 kernel: kvm [1]: HYP mode not available Oct 2 19:30:41.768844 kernel: Initialise system trusted keyrings Oct 2 19:30:41.768851 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 2 19:30:41.768858 kernel: Key type asymmetric registered 
Oct 2 19:30:41.768864 kernel: Asymmetric key parser 'x509' registered Oct 2 19:30:41.768871 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 19:30:41.768878 kernel: io scheduler mq-deadline registered Oct 2 19:30:41.768884 kernel: io scheduler kyber registered Oct 2 19:30:41.768891 kernel: io scheduler bfq registered Oct 2 19:30:41.768898 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 2 19:30:41.768906 kernel: ACPI: button: Power Button [PWRB] Oct 2 19:30:41.768913 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 2 19:30:41.768976 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Oct 2 19:30:41.768985 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 19:30:41.768992 kernel: thunder_xcv, ver 1.0 Oct 2 19:30:41.768998 kernel: thunder_bgx, ver 1.0 Oct 2 19:30:41.769005 kernel: nicpf, ver 1.0 Oct 2 19:30:41.769011 kernel: nicvf, ver 1.0 Oct 2 19:30:41.769080 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 2 19:30:41.769186 kernel: rtc-efi rtc-efi.0: setting system clock to 2023-10-02T19:30:41 UTC (1696275041) Oct 2 19:30:41.769197 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 2 19:30:41.769204 kernel: NET: Registered PF_INET6 protocol family Oct 2 19:30:41.769210 kernel: Segment Routing with IPv6 Oct 2 19:30:41.769217 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 19:30:41.769224 kernel: NET: Registered PF_PACKET protocol family Oct 2 19:30:41.769231 kernel: Key type dns_resolver registered Oct 2 19:30:41.769237 kernel: registered taskstats version 1 Oct 2 19:30:41.769246 kernel: Loading compiled-in X.509 certificates Oct 2 19:30:41.769253 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 3a2a38edc68cb70dc60ec0223a6460557b3bb28d' Oct 2 19:30:41.769260 kernel: Key type .fscrypt registered Oct 2 19:30:41.769266 kernel: Key type fscrypt-provisioning registered Oct 2 19:30:41.769273 kernel: ima: No TPM chip found, 
activating TPM-bypass! Oct 2 19:30:41.769280 kernel: ima: Allocated hash algorithm: sha1 Oct 2 19:30:41.769287 kernel: ima: No architecture policies found Oct 2 19:30:41.769294 kernel: Freeing unused kernel memory: 34560K Oct 2 19:30:41.769300 kernel: Run /init as init process Oct 2 19:30:41.769308 kernel: with arguments: Oct 2 19:30:41.769315 kernel: /init Oct 2 19:30:41.769322 kernel: with environment: Oct 2 19:30:41.769328 kernel: HOME=/ Oct 2 19:30:41.769334 kernel: TERM=linux Oct 2 19:30:41.769341 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 19:30:41.769349 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:30:41.769358 systemd[1]: Detected virtualization kvm. Oct 2 19:30:41.769367 systemd[1]: Detected architecture arm64. Oct 2 19:30:41.769374 systemd[1]: Running in initrd. Oct 2 19:30:41.769381 systemd[1]: No hostname configured, using default hostname. Oct 2 19:30:41.769394 systemd[1]: Hostname set to . Oct 2 19:30:41.769402 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:30:41.769409 systemd[1]: Queued start job for default target initrd.target. Oct 2 19:30:41.769416 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:30:41.769423 systemd[1]: Reached target cryptsetup.target. Oct 2 19:30:41.769432 systemd[1]: Reached target paths.target. Oct 2 19:30:41.769439 systemd[1]: Reached target slices.target. Oct 2 19:30:41.769446 systemd[1]: Reached target swap.target. Oct 2 19:30:41.769453 systemd[1]: Reached target timers.target. Oct 2 19:30:41.769460 systemd[1]: Listening on iscsid.socket. Oct 2 19:30:41.769468 systemd[1]: Listening on iscsiuio.socket. Oct 2 19:30:41.769475 systemd[1]: Listening on systemd-journald-audit.socket. 
Oct 2 19:30:41.769484 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 19:30:41.769491 systemd[1]: Listening on systemd-journald.socket. Oct 2 19:30:41.769498 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:30:41.769505 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:30:41.769512 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:30:41.769519 systemd[1]: Reached target sockets.target. Oct 2 19:30:41.769526 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:30:41.769533 systemd[1]: Finished network-cleanup.service. Oct 2 19:30:41.769540 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 19:30:41.769548 systemd[1]: Starting systemd-journald.service... Oct 2 19:30:41.769555 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:30:41.769563 systemd[1]: Starting systemd-resolved.service... Oct 2 19:30:41.769570 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 19:30:41.769577 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:30:41.769584 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 19:30:41.769591 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:30:41.769599 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 19:30:41.769606 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 19:30:41.769615 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:30:41.769626 kernel: audit: type=1130 audit(1696275041.765:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:41.769638 systemd-journald[290]: Journal started Oct 2 19:30:41.769682 systemd-journald[290]: Runtime Journal (/run/log/journal/29b71ed3e0314b80a286ec8bd3403e2e) is 6.0M, max 48.7M, 42.6M free. 
Oct 2 19:30:41.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:41.754030 systemd-modules-load[291]: Inserted module 'overlay' Oct 2 19:30:41.772312 systemd[1]: Started systemd-journald.service. Oct 2 19:30:41.772331 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 19:30:41.776444 kernel: audit: type=1130 audit(1696275041.772:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:41.776476 kernel: Bridge firewalling registered Oct 2 19:30:41.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:41.775359 systemd-modules-load[291]: Inserted module 'br_netfilter' Oct 2 19:30:41.778790 systemd-resolved[292]: Positive Trust Anchors: Oct 2 19:30:41.778804 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:30:41.778831 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:30:41.788087 kernel: SCSI subsystem initialized Oct 2 19:30:41.783791 systemd-resolved[292]: Defaulting to hostname 'linux'. 
Oct 2 19:30:41.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:41.787174 systemd[1]: Started systemd-resolved.service. Oct 2 19:30:41.788145 systemd[1]: Reached target nss-lookup.target. Oct 2 19:30:41.792369 kernel: audit: type=1130 audit(1696275041.787:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:41.794116 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 19:30:41.794150 kernel: device-mapper: uevent: version 1.0.3 Oct 2 19:30:41.795117 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 19:30:41.795226 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 19:30:41.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:41.798392 systemd[1]: Starting dracut-cmdline.service... Oct 2 19:30:41.799489 kernel: audit: type=1130 audit(1696275041.795:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:41.799840 systemd-modules-load[291]: Inserted module 'dm_multipath' Oct 2 19:30:41.801392 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:30:41.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:41.802836 systemd[1]: Starting systemd-sysctl.service... 
Oct 2 19:30:41.805492 kernel: audit: type=1130 audit(1696275041.801:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:41.812285 dracut-cmdline[309]: dracut-dracut-053 Oct 2 19:30:41.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:41.812270 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:30:41.815610 kernel: audit: type=1130 audit(1696275041.812:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:41.816511 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca Oct 2 19:30:41.893134 kernel: Loading iSCSI transport class v2.0-870. Oct 2 19:30:41.905482 kernel: iscsi: registered transport (tcp) Oct 2 19:30:41.921956 kernel: iscsi: registered transport (qla4xxx) Oct 2 19:30:41.922014 kernel: QLogic iSCSI HBA Driver Oct 2 19:30:41.970642 systemd[1]: Finished dracut-cmdline.service. Oct 2 19:30:41.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:41.972176 systemd[1]: Starting dracut-pre-udev.service... 
Oct 2 19:30:41.974293 kernel: audit: type=1130 audit(1696275041.971:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:42.022127 kernel: raid6: neonx8 gen() 13582 MB/s Oct 2 19:30:42.039117 kernel: raid6: neonx8 xor() 10783 MB/s Oct 2 19:30:42.056115 kernel: raid6: neonx4 gen() 13533 MB/s Oct 2 19:30:42.073109 kernel: raid6: neonx4 xor() 11137 MB/s Oct 2 19:30:42.090113 kernel: raid6: neonx2 gen() 12682 MB/s Oct 2 19:30:42.107114 kernel: raid6: neonx2 xor() 9834 MB/s Oct 2 19:30:42.124115 kernel: raid6: neonx1 gen() 10299 MB/s Oct 2 19:30:42.141116 kernel: raid6: neonx1 xor() 8653 MB/s Oct 2 19:30:42.160120 kernel: raid6: int64x8 gen() 6150 MB/s Oct 2 19:30:42.175119 kernel: raid6: int64x8 xor() 3533 MB/s Oct 2 19:30:42.192111 kernel: raid6: int64x4 gen() 7164 MB/s Oct 2 19:30:42.209111 kernel: raid6: int64x4 xor() 3810 MB/s Oct 2 19:30:42.226137 kernel: raid6: int64x2 gen() 6112 MB/s Oct 2 19:30:42.243689 kernel: raid6: int64x2 xor() 3303 MB/s Oct 2 19:30:42.260161 kernel: raid6: int64x1 gen() 5041 MB/s Oct 2 19:30:42.277305 kernel: raid6: int64x1 xor() 2645 MB/s Oct 2 19:30:42.277359 kernel: raid6: using algorithm neonx8 gen() 13582 MB/s Oct 2 19:30:42.277368 kernel: raid6: .... xor() 10783 MB/s, rmw enabled Oct 2 19:30:42.277377 kernel: raid6: using neon recovery algorithm Oct 2 19:30:42.288124 kernel: xor: measuring software checksum speed Oct 2 19:30:42.288184 kernel: 8regs : 17297 MB/sec Oct 2 19:30:42.289328 kernel: 32regs : 20755 MB/sec Oct 2 19:30:42.290510 kernel: arm64_neon : 27816 MB/sec Oct 2 19:30:42.290546 kernel: xor: using function: arm64_neon (27816 MB/sec) Oct 2 19:30:42.345141 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Oct 2 19:30:42.360438 systemd[1]: Finished dracut-pre-udev.service. 
Oct 2 19:30:42.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:42.363000 audit: BPF prog-id=7 op=LOAD
Oct 2 19:30:42.363000 audit: BPF prog-id=8 op=LOAD
Oct 2 19:30:42.364116 kernel: audit: type=1130 audit(1696275042.360:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:42.364143 kernel: audit: type=1334 audit(1696275042.363:10): prog-id=7 op=LOAD
Oct 2 19:30:42.364246 systemd[1]: Starting systemd-udevd.service...
Oct 2 19:30:42.381190 systemd-udevd[492]: Using default interface naming scheme 'v252'.
Oct 2 19:30:42.384655 systemd[1]: Started systemd-udevd.service.
Oct 2 19:30:42.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:42.387574 systemd[1]: Starting dracut-pre-trigger.service...
Oct 2 19:30:42.402470 dracut-pre-trigger[500]: rd.md=0: removing MD RAID activation
Oct 2 19:30:42.439191 systemd[1]: Finished dracut-pre-trigger.service.
Oct 2 19:30:42.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:42.440598 systemd[1]: Starting systemd-udev-trigger.service...
Oct 2 19:30:42.478989 systemd[1]: Finished systemd-udev-trigger.service.
Oct 2 19:30:42.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:42.512474 kernel: virtio_blk virtio1: [vda] 9289728 512-byte logical blocks (4.76 GB/4.43 GiB)
Oct 2 19:30:42.517126 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 2 19:30:42.547987 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Oct 2 19:30:42.549925 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (553)
Oct 2 19:30:42.551874 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Oct 2 19:30:42.557441 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Oct 2 19:30:42.558146 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Oct 2 19:30:42.562051 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Oct 2 19:30:42.563534 systemd[1]: Starting disk-uuid.service...
Oct 2 19:30:42.573142 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 2 19:30:43.591999 disk-uuid[567]: The operation has completed successfully.
Oct 2 19:30:43.592931 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 2 19:30:43.632754 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 2 19:30:43.633772 systemd[1]: Finished disk-uuid.service.
Oct 2 19:30:43.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:43.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:43.637004 systemd[1]: Starting verity-setup.service...
Oct 2 19:30:43.659137 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Oct 2 19:30:43.686389 systemd[1]: Found device dev-mapper-usr.device.
Oct 2 19:30:43.688759 systemd[1]: Mounting sysusr-usr.mount...
Oct 2 19:30:43.690359 systemd[1]: Finished verity-setup.service.
Oct 2 19:30:43.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:43.766237 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Oct 2 19:30:43.766652 systemd[1]: Mounted sysusr-usr.mount.
Oct 2 19:30:43.767328 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Oct 2 19:30:43.768065 systemd[1]: Starting ignition-setup.service...
Oct 2 19:30:43.769916 systemd[1]: Starting parse-ip-for-networkd.service...
Oct 2 19:30:43.778402 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 2 19:30:43.778452 kernel: BTRFS info (device vda6): using free space tree
Oct 2 19:30:43.779107 kernel: BTRFS info (device vda6): has skinny extents
Oct 2 19:30:43.790586 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 2 19:30:43.799059 systemd[1]: Finished ignition-setup.service.
Oct 2 19:30:43.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:43.800635 systemd[1]: Starting ignition-fetch-offline.service...
Oct 2 19:30:43.886582 systemd[1]: Finished parse-ip-for-networkd.service.
Oct 2 19:30:43.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:43.888000 audit: BPF prog-id=9 op=LOAD
Oct 2 19:30:43.889082 systemd[1]: Starting systemd-networkd.service...
Oct 2 19:30:43.908196 ignition[654]: Ignition 2.14.0
Oct 2 19:30:43.908208 ignition[654]: Stage: fetch-offline
Oct 2 19:30:43.908253 ignition[654]: no configs at "/usr/lib/ignition/base.d"
Oct 2 19:30:43.908263 ignition[654]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 2 19:30:43.908413 ignition[654]: parsed url from cmdline: ""
Oct 2 19:30:43.908417 ignition[654]: no config URL provided
Oct 2 19:30:43.908421 ignition[654]: reading system config file "/usr/lib/ignition/user.ign"
Oct 2 19:30:43.908430 ignition[654]: no config at "/usr/lib/ignition/user.ign"
Oct 2 19:30:43.908450 ignition[654]: op(1): [started] loading QEMU firmware config module
Oct 2 19:30:43.908455 ignition[654]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 2 19:30:43.917610 ignition[654]: op(1): [finished] loading QEMU firmware config module
Oct 2 19:30:43.920928 systemd-networkd[744]: lo: Link UP
Oct 2 19:30:43.920940 systemd-networkd[744]: lo: Gained carrier
Oct 2 19:30:43.921330 systemd-networkd[744]: Enumeration completed
Oct 2 19:30:43.921535 systemd-networkd[744]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 2 19:30:43.922827 systemd-networkd[744]: eth0: Link UP
Oct 2 19:30:43.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:43.922831 systemd-networkd[744]: eth0: Gained carrier
Oct 2 19:30:43.924219 systemd[1]: Started systemd-networkd.service.
Oct 2 19:30:43.925279 systemd[1]: Reached target network.target.
Oct 2 19:30:43.927019 systemd[1]: Starting iscsiuio.service...
Oct 2 19:30:43.936934 systemd[1]: Started iscsiuio.service.
Oct 2 19:30:43.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:43.938472 systemd[1]: Starting iscsid.service...
Oct 2 19:30:43.942719 iscsid[750]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Oct 2 19:30:43.942719 iscsid[750]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Oct 2 19:30:43.942719 iscsid[750]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Oct 2 19:30:43.942719 iscsid[750]: If using hardware iscsi like qla4xxx this message can be ignored.
Oct 2 19:30:43.942719 iscsid[750]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Oct 2 19:30:43.942719 iscsid[750]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Oct 2 19:30:43.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:43.945010 ignition[654]: parsing config with SHA512: 610668ce0b9621e1c8e3a29849a7e6ba37645754f592218c9e63e36be164ec1fb5399cd4f570b010705785e29c07c645fbe11962958440e3013eda622b057645
Oct 2 19:30:43.946312 systemd[1]: Started iscsid.service.
Oct 2 19:30:43.949762 systemd[1]: Starting dracut-initqueue.service...
Oct 2 19:30:43.964968 systemd-networkd[744]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 2 19:30:43.965439 systemd[1]: Finished dracut-initqueue.service.
Oct 2 19:30:43.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:43.966740 systemd[1]: Reached target remote-fs-pre.target.
Oct 2 19:30:43.967871 systemd[1]: Reached target remote-cryptsetup.target.
Oct 2 19:30:43.969344 systemd[1]: Reached target remote-fs.target.
Oct 2 19:30:43.971811 systemd[1]: Starting dracut-pre-mount.service...
Oct 2 19:30:43.973933 unknown[654]: fetched base config from "system"
Oct 2 19:30:43.973945 unknown[654]: fetched user config from "qemu"
Oct 2 19:30:43.974347 ignition[654]: fetch-offline: fetch-offline passed
Oct 2 19:30:43.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:43.975453 systemd[1]: Finished ignition-fetch-offline.service.
Oct 2 19:30:43.974415 ignition[654]: Ignition finished successfully
Oct 2 19:30:43.976816 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 2 19:30:43.977632 systemd[1]: Starting ignition-kargs.service...
Oct 2 19:30:43.983798 systemd[1]: Finished dracut-pre-mount.service.
Oct 2 19:30:43.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:43.988878 ignition[761]: Ignition 2.14.0
Oct 2 19:30:43.988889 ignition[761]: Stage: kargs
Oct 2 19:30:43.988992 ignition[761]: no configs at "/usr/lib/ignition/base.d"
Oct 2 19:30:43.991271 systemd[1]: Finished ignition-kargs.service.
Oct 2 19:30:43.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:43.989002 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 2 19:30:43.989822 ignition[761]: kargs: kargs passed
Oct 2 19:30:43.993329 systemd[1]: Starting ignition-disks.service...
Oct 2 19:30:43.989875 ignition[761]: Ignition finished successfully
Oct 2 19:30:44.002160 ignition[771]: Ignition 2.14.0
Oct 2 19:30:44.002170 ignition[771]: Stage: disks
Oct 2 19:30:44.002276 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Oct 2 19:30:44.004036 systemd[1]: Finished ignition-disks.service.
Oct 2 19:30:44.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:44.002286 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 2 19:30:44.003111 ignition[771]: disks: disks passed
Oct 2 19:30:44.005889 systemd[1]: Reached target initrd-root-device.target.
Oct 2 19:30:44.003179 ignition[771]: Ignition finished successfully
Oct 2 19:30:44.006879 systemd[1]: Reached target local-fs-pre.target.
Oct 2 19:30:44.008572 systemd[1]: Reached target local-fs.target.
Oct 2 19:30:44.009569 systemd[1]: Reached target sysinit.target.
Oct 2 19:30:44.010715 systemd[1]: Reached target basic.target.
Oct 2 19:30:44.012843 systemd[1]: Starting systemd-fsck-root.service...
Oct 2 19:30:44.024721 systemd-resolved[292]: Detected conflict on linux IN A 10.0.0.13
Oct 2 19:30:44.024737 systemd-resolved[292]: Hostname conflict, changing published hostname from 'linux' to 'linux10'.
Oct 2 19:30:44.026906 systemd-fsck[779]: ROOT: clean, 603/553520 files, 56011/553472 blocks
Oct 2 19:30:44.030739 systemd[1]: Finished systemd-fsck-root.service.
Oct 2 19:30:44.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:44.032594 systemd[1]: Mounting sysroot.mount...
Oct 2 19:30:44.042112 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Oct 2 19:30:44.042247 systemd[1]: Mounted sysroot.mount.
Oct 2 19:30:44.042836 systemd[1]: Reached target initrd-root-fs.target.
Oct 2 19:30:44.044970 systemd[1]: Mounting sysroot-usr.mount...
Oct 2 19:30:44.045741 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Oct 2 19:30:44.045785 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 2 19:30:44.045811 systemd[1]: Reached target ignition-diskful.target.
Oct 2 19:30:44.048551 systemd[1]: Mounted sysroot-usr.mount.
Oct 2 19:30:44.050036 systemd[1]: Starting initrd-setup-root.service...
Oct 2 19:30:44.055770 initrd-setup-root[789]: cut: /sysroot/etc/passwd: No such file or directory
Oct 2 19:30:44.060813 initrd-setup-root[797]: cut: /sysroot/etc/group: No such file or directory
Oct 2 19:30:44.066146 initrd-setup-root[805]: cut: /sysroot/etc/shadow: No such file or directory
Oct 2 19:30:44.070193 initrd-setup-root[813]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 2 19:30:44.103930 systemd[1]: Finished initrd-setup-root.service.
Oct 2 19:30:44.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:44.105504 systemd[1]: Starting ignition-mount.service...
Oct 2 19:30:44.106843 systemd[1]: Starting sysroot-boot.service...
Oct 2 19:30:44.113010 bash[830]: umount: /sysroot/usr/share/oem: not mounted.
Oct 2 19:30:44.124430 ignition[832]: INFO : Ignition 2.14.0
Oct 2 19:30:44.124430 ignition[832]: INFO : Stage: mount
Oct 2 19:30:44.125883 ignition[832]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 2 19:30:44.125883 ignition[832]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 2 19:30:44.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:44.126374 systemd[1]: Finished sysroot-boot.service.
Oct 2 19:30:44.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:44.128730 ignition[832]: INFO : mount: mount passed
Oct 2 19:30:44.128730 ignition[832]: INFO : Ignition finished successfully
Oct 2 19:30:44.127691 systemd[1]: Finished ignition-mount.service.
Oct 2 19:30:44.281259 systemd-resolved[292]: Detected conflict on linux10 IN A 10.0.0.13
Oct 2 19:30:44.281278 systemd-resolved[292]: Hostname conflict, changing published hostname from 'linux10' to 'linux18'.
Oct 2 19:30:44.698782 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Oct 2 19:30:44.707143 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (840)
Oct 2 19:30:44.710199 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 2 19:30:44.710215 kernel: BTRFS info (device vda6): using free space tree
Oct 2 19:30:44.710225 kernel: BTRFS info (device vda6): has skinny extents
Oct 2 19:30:44.730217 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Oct 2 19:30:44.731861 systemd[1]: Starting ignition-files.service...
Oct 2 19:30:44.751212 ignition[860]: INFO : Ignition 2.14.0
Oct 2 19:30:44.751212 ignition[860]: INFO : Stage: files
Oct 2 19:30:44.753406 ignition[860]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 2 19:30:44.753406 ignition[860]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 2 19:30:44.756329 ignition[860]: DEBUG : files: compiled without relabeling support, skipping
Oct 2 19:30:44.761050 ignition[860]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 2 19:30:44.761050 ignition[860]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 2 19:30:44.763938 ignition[860]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 2 19:30:44.765482 ignition[860]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 2 19:30:44.767630 unknown[860]: wrote ssh authorized keys file for user: core
Oct 2 19:30:44.769207 ignition[860]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 2 19:30:44.770860 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz"
Oct 2 19:30:44.770860 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1
Oct 2 19:30:45.263665 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 2 19:30:45.400240 systemd-networkd[744]: eth0: Gained IPv6LL
Oct 2 19:30:45.582879 ignition[860]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a
Oct 2 19:30:45.582879 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz"
Oct 2 19:30:45.586404 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz"
Oct 2 19:30:45.586404 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1
Oct 2 19:30:45.737042 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Oct 2 19:30:45.861133 ignition[860]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251
Oct 2 19:30:45.863625 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz"
Oct 2 19:30:45.863625 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm"
Oct 2 19:30:45.863625 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://storage.googleapis.com/kubernetes-release/release/v1.27.2/bin/linux/arm64/kubeadm: attempt #1
Oct 2 19:30:45.921828 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Oct 2 19:30:46.235008 ignition[860]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 45b3100984c979ba0f1c0df8f4211474c2d75ebe916e677dff5fc8e3b3697cf7a953da94e356f39684cc860dff6878b772b7514c55651c2f866d9efeef23f970
Oct 2 19:30:46.235008 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm"
Oct 2 19:30:46.239835 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet"
Oct 2 19:30:46.239835 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://storage.googleapis.com/kubernetes-release/release/v1.27.2/bin/linux/arm64/kubelet: attempt #1
Oct 2 19:30:46.281716 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Oct 2 19:30:46.957157 ignition[860]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 71857ff499ae135fa478e1827a0ed8865e578a8d2b1e25876e914fd0beba03733801c0654bcd4c0567bafeb16887dafb2dbbe8d1116e6ea28dcd8366c142d348
Oct 2 19:30:46.959860 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet"
Oct 2 19:30:46.959860 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Oct 2 19:30:46.959860 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Oct 2 19:30:46.959860 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json"
Oct 2 19:30:46.959860 ignition[860]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json"
Oct 2 19:30:46.959860 ignition[860]: INFO : files: op(9): [started] processing unit "prepare-cni-plugins.service"
Oct 2 19:30:46.959860 ignition[860]: INFO : files: op(9): op(a): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Oct 2 19:30:46.959860 ignition[860]: INFO : files: op(9): op(a): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Oct 2 19:30:46.959860 ignition[860]: INFO : files: op(9): [finished] processing unit "prepare-cni-plugins.service"
Oct 2 19:30:46.959860 ignition[860]: INFO : files: op(b): [started] processing unit "prepare-critools.service"
Oct 2 19:30:46.959860 ignition[860]: INFO : files: op(b): op(c): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Oct 2 19:30:46.959860 ignition[860]: INFO : files: op(b): op(c): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Oct 2 19:30:46.959860 ignition[860]: INFO : files: op(b): [finished] processing unit "prepare-critools.service"
Oct 2 19:30:46.959860 ignition[860]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 2 19:30:46.959860 ignition[860]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 2 19:30:46.959860 ignition[860]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 2 19:30:46.959860 ignition[860]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 2 19:30:46.959860 ignition[860]: INFO : files: op(f): [started] setting preset to enabled for "prepare-cni-plugins.service"
Oct 2 19:30:46.984736 ignition[860]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Oct 2 19:30:46.984736 ignition[860]: INFO : files: op(10): [started] setting preset to enabled for "prepare-critools.service"
Oct 2 19:30:46.984736 ignition[860]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-critools.service"
Oct 2 19:30:46.984736 ignition[860]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service"
Oct 2 19:30:46.984736 ignition[860]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 2 19:30:47.000298 ignition[860]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 2 19:30:47.002439 ignition[860]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 2 19:30:47.002439 ignition[860]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 2 19:30:47.002439 ignition[860]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 2 19:30:47.002439 ignition[860]: INFO : files: files passed
Oct 2 19:30:47.002439 ignition[860]: INFO : Ignition finished successfully
Oct 2 19:30:47.011129 kernel: kauditd_printk_skb: 22 callbacks suppressed
Oct 2 19:30:47.011152 kernel: audit: type=1130 audit(1696275047.004:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.002554 systemd[1]: Finished ignition-files.service.
Oct 2 19:30:47.005547 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Oct 2 19:30:47.013111 initrd-setup-root-after-ignition[884]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Oct 2 19:30:47.009962 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Oct 2 19:30:47.015717 initrd-setup-root-after-ignition[887]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 2 19:30:47.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.010863 systemd[1]: Starting ignition-quench.service...
Oct 2 19:30:47.015181 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Oct 2 19:30:47.020646 kernel: audit: type=1130 audit(1696275047.016:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.016526 systemd[1]: Reached target ignition-complete.target.
Oct 2 19:30:47.021995 systemd[1]: Starting initrd-parse-etc.service...
Oct 2 19:30:47.022787 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 2 19:30:47.027961 kernel: audit: type=1130 audit(1696275047.023:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.027985 kernel: audit: type=1131 audit(1696275047.023:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.022895 systemd[1]: Finished ignition-quench.service.
Oct 2 19:30:47.038817 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 2 19:30:47.038939 systemd[1]: Finished initrd-parse-etc.service.
Oct 2 19:30:47.044285 kernel: audit: type=1130 audit(1696275047.040:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.044315 kernel: audit: type=1131 audit(1696275047.040:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.040352 systemd[1]: Reached target initrd-fs.target.
Oct 2 19:30:47.044927 systemd[1]: Reached target initrd.target.
Oct 2 19:30:47.046109 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Oct 2 19:30:47.046934 systemd[1]: Starting dracut-pre-pivot.service...
Oct 2 19:30:47.059873 systemd[1]: Finished dracut-pre-pivot.service.
Oct 2 19:30:47.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.061471 systemd[1]: Starting initrd-cleanup.service...
Oct 2 19:30:47.063839 kernel: audit: type=1130 audit(1696275047.060:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.071762 systemd[1]: Stopped target nss-lookup.target.
Oct 2 19:30:47.072663 systemd[1]: Stopped target remote-cryptsetup.target.
Oct 2 19:30:47.074992 systemd[1]: Stopped target timers.target.
Oct 2 19:30:47.077355 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 2 19:30:47.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.077485 systemd[1]: Stopped dracut-pre-pivot.service.
Oct 2 19:30:47.081893 kernel: audit: type=1131 audit(1696275047.078:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.078444 systemd[1]: Stopped target initrd.target.
Oct 2 19:30:47.081447 systemd[1]: Stopped target basic.target.
Oct 2 19:30:47.082496 systemd[1]: Stopped target ignition-complete.target.
Oct 2 19:30:47.083574 systemd[1]: Stopped target ignition-diskful.target.
Oct 2 19:30:47.084663 systemd[1]: Stopped target initrd-root-device.target.
Oct 2 19:30:47.085861 systemd[1]: Stopped target remote-fs.target.
Oct 2 19:30:47.087009 systemd[1]: Stopped target remote-fs-pre.target.
Oct 2 19:30:47.088167 systemd[1]: Stopped target sysinit.target.
Oct 2 19:30:47.089201 systemd[1]: Stopped target local-fs.target.
Oct 2 19:30:47.090374 systemd[1]: Stopped target local-fs-pre.target.
Oct 2 19:30:47.091700 systemd[1]: Stopped target swap.target.
Oct 2 19:30:47.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.092599 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 2 19:30:47.097457 kernel: audit: type=1131 audit(1696275047.093:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.092719 systemd[1]: Stopped dracut-pre-mount.service.
Oct 2 19:30:47.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.094456 systemd[1]: Stopped target cryptsetup.target.
Oct 2 19:30:47.101560 kernel: audit: type=1131 audit(1696275047.098:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.096872 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 2 19:30:47.096986 systemd[1]: Stopped dracut-initqueue.service.
Oct 2 19:30:47.098241 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 2 19:30:47.098341 systemd[1]: Stopped ignition-fetch-offline.service.
Oct 2 19:30:47.101187 systemd[1]: Stopped target paths.target.
Oct 2 19:30:47.102121 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 2 19:30:47.106142 systemd[1]: Stopped systemd-ask-password-console.path.
Oct 2 19:30:47.107515 systemd[1]: Stopped target slices.target.
Oct 2 19:30:47.108232 systemd[1]: Stopped target sockets.target.
Oct 2 19:30:47.109273 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 2 19:30:47.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.109450 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Oct 2 19:30:47.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.110449 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 2 19:30:47.110543 systemd[1]: Stopped ignition-files.service.
Oct 2 19:30:47.112659 systemd[1]: Stopping ignition-mount.service...
Oct 2 19:30:47.115725 iscsid[750]: iscsid shutting down.
Oct 2 19:30:47.113739 systemd[1]: Stopping iscsid.service...
Oct 2 19:30:47.116285 systemd[1]: Stopping sysroot-boot.service...
Oct 2 19:30:47.116848 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 2 19:30:47.116994 systemd[1]: Stopped systemd-udev-trigger.service.
Oct 2 19:30:47.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.118382 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 2 19:30:47.118488 systemd[1]: Stopped dracut-pre-trigger.service.
Oct 2 19:30:47.122168 ignition[901]: INFO : Ignition 2.14.0
Oct 2 19:30:47.122168 ignition[901]: INFO : Stage: umount
Oct 2 19:30:47.122168 ignition[901]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 2 19:30:47.122168 ignition[901]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 2 19:30:47.122168 ignition[901]: INFO : umount: umount passed
Oct 2 19:30:47.122168 ignition[901]: INFO : Ignition finished successfully
Oct 2 19:30:47.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.121274 systemd[1]: iscsid.service: Deactivated successfully.
Oct 2 19:30:47.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.122014 systemd[1]: Stopped iscsid.service.
Oct 2 19:30:47.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.123234 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 2 19:30:47.123330 systemd[1]: Closed iscsid.socket.
Oct 2 19:30:47.124937 systemd[1]: Stopping iscsiuio.service...
Oct 2 19:30:47.126608 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 2 19:30:47.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.126698 systemd[1]: Finished initrd-cleanup.service.
Oct 2 19:30:47.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.127689 systemd[1]: iscsiuio.service: Deactivated successfully.
Oct 2 19:30:47.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.127774 systemd[1]: Stopped iscsiuio.service.
Oct 2 19:30:47.128807 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 2 19:30:47.128882 systemd[1]: Stopped ignition-mount.service.
Oct 2 19:30:47.130718 systemd[1]: Stopped target network.target.
Oct 2 19:30:47.131777 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 2 19:30:47.131822 systemd[1]: Closed iscsiuio.socket.
Oct 2 19:30:47.132813 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 2 19:30:47.132884 systemd[1]: Stopped ignition-disks.service.
Oct 2 19:30:47.133947 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 2 19:30:47.133988 systemd[1]: Stopped ignition-kargs.service.
Oct 2 19:30:47.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.135292 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 2 19:30:47.135329 systemd[1]: Stopped ignition-setup.service.
Oct 2 19:30:47.136600 systemd[1]: Stopping systemd-networkd.service...
Oct 2 19:30:47.138055 systemd[1]: Stopping systemd-resolved.service...
Oct 2 19:30:47.141761 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 2 19:30:47.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.145172 systemd-networkd[744]: eth0: DHCPv6 lease lost
Oct 2 19:30:47.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.146553 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 2 19:30:47.155000 audit: BPF prog-id=9 op=UNLOAD
Oct 2 19:30:47.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.146654 systemd[1]: Stopped systemd-networkd.service.
Oct 2 19:30:47.148121 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 2 19:30:47.148152 systemd[1]: Closed systemd-networkd.socket.
Oct 2 19:30:47.150622 systemd[1]: Stopping network-cleanup.service...
Oct 2 19:30:47.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.151319 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 2 19:30:47.151398 systemd[1]: Stopped parse-ip-for-networkd.service.
Oct 2 19:30:47.152530 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 2 19:30:47.152568 systemd[1]: Stopped systemd-sysctl.service.
Oct 2 19:30:47.154288 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 2 19:30:47.167000 audit: BPF prog-id=6 op=UNLOAD
Oct 2 19:30:47.154333 systemd[1]: Stopped systemd-modules-load.service.
Oct 2 19:30:47.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.155446 systemd[1]: Stopping systemd-udevd.service...
Oct 2 19:30:47.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.160916 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 2 19:30:47.161493 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 2 19:30:47.161589 systemd[1]: Stopped systemd-resolved.service.
Oct 2 19:30:47.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.167345 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 2 19:30:47.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.167460 systemd[1]: Stopped network-cleanup.service.
Oct 2 19:30:47.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.169432 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 2 19:30:47.169554 systemd[1]: Stopped systemd-udevd.service.
Oct 2 19:30:47.170707 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 2 19:30:47.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.170741 systemd[1]: Closed systemd-udevd-control.socket.
Oct 2 19:30:47.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.171643 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 2 19:30:47.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.171676 systemd[1]: Closed systemd-udevd-kernel.socket.
Oct 2 19:30:47.173077 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 2 19:30:47.173133 systemd[1]: Stopped dracut-pre-udev.service.
Oct 2 19:30:47.174470 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 2 19:30:47.174504 systemd[1]: Stopped dracut-cmdline.service.
Oct 2 19:30:47.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.175689 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 2 19:30:47.175728 systemd[1]: Stopped dracut-cmdline-ask.service.
Oct 2 19:30:47.177709 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Oct 2 19:30:47.178837 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 2 19:30:47.178897 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Oct 2 19:30:47.180837 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 2 19:30:47.180882 systemd[1]: Stopped kmod-static-nodes.service.
Oct 2 19:30:47.181596 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 2 19:30:47.181631 systemd[1]: Stopped systemd-vconsole-setup.service.
Oct 2 19:30:47.183745 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 2 19:30:47.185566 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 2 19:30:47.185650 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Oct 2 19:30:47.208583 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 2 19:30:47.208687 systemd[1]: Stopped sysroot-boot.service.
Oct 2 19:30:47.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.210011 systemd[1]: Reached target initrd-switch-root.target.
Oct 2 19:30:47.210875 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 2 19:30:47.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.210927 systemd[1]: Stopped initrd-setup-root.service.
Oct 2 19:30:47.212820 systemd[1]: Starting initrd-switch-root.service...
Oct 2 19:30:47.220706 systemd[1]: Switching root.
Oct 2 19:30:47.234543 systemd-journald[290]: Journal stopped
Oct 2 19:30:49.363728 systemd-journald[290]: Received SIGTERM from PID 1 (systemd).
Oct 2 19:30:49.363864 kernel: SELinux: Class mctp_socket not defined in policy.
Oct 2 19:30:49.363877 kernel: SELinux: Class anon_inode not defined in policy.
Oct 2 19:30:49.363888 kernel: SELinux: the above unknown classes and permissions will be allowed
Oct 2 19:30:49.363898 kernel: SELinux: policy capability network_peer_controls=1
Oct 2 19:30:49.363908 kernel: SELinux: policy capability open_perms=1
Oct 2 19:30:49.363920 kernel: SELinux: policy capability extended_socket_class=1
Oct 2 19:30:49.363931 kernel: SELinux: policy capability always_check_network=0
Oct 2 19:30:49.363941 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 2 19:30:49.363960 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 2 19:30:49.363984 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 2 19:30:49.363994 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 2 19:30:49.364004 systemd[1]: Successfully loaded SELinux policy in 33.114ms.
Oct 2 19:30:49.364018 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.609ms.
Oct 2 19:30:49.364029 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 2 19:30:49.364041 systemd[1]: Detected virtualization kvm.
Oct 2 19:30:49.364051 systemd[1]: Detected architecture arm64.
Oct 2 19:30:49.364062 systemd[1]: Detected first boot.
Oct 2 19:30:49.364073 systemd[1]: Initializing machine ID from VM UUID.
Oct 2 19:30:49.364085 systemd[1]: Populated /etc with preset unit settings.
Oct 2 19:30:49.364105 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Oct 2 19:30:49.364117 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 2 19:30:49.364129 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 2 19:30:49.364140 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 2 19:30:49.364154 systemd[1]: Stopped initrd-switch-root.service.
Oct 2 19:30:49.364169 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 2 19:30:49.364188 systemd[1]: Created slice system-addon\x2dconfig.slice.
Oct 2 19:30:49.364200 systemd[1]: Created slice system-addon\x2drun.slice.
Oct 2 19:30:49.364214 systemd[1]: Created slice system-getty.slice.
Oct 2 19:30:49.364225 systemd[1]: Created slice system-modprobe.slice.
Oct 2 19:30:49.364236 systemd[1]: Created slice system-serial\x2dgetty.slice.
Oct 2 19:30:49.364247 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Oct 2 19:30:49.364257 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Oct 2 19:30:49.364269 systemd[1]: Created slice user.slice.
Oct 2 19:30:49.364279 systemd[1]: Started systemd-ask-password-console.path.
Oct 2 19:30:49.364290 systemd[1]: Started systemd-ask-password-wall.path.
Oct 2 19:30:49.364300 systemd[1]: Set up automount boot.automount.
Oct 2 19:30:49.364311 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Oct 2 19:30:49.364321 systemd[1]: Stopped target initrd-switch-root.target.
Oct 2 19:30:49.364331 systemd[1]: Stopped target initrd-fs.target.
Oct 2 19:30:49.364344 systemd[1]: Stopped target initrd-root-fs.target.
Oct 2 19:30:49.364363 systemd[1]: Reached target integritysetup.target.
Oct 2 19:30:49.364375 systemd[1]: Reached target remote-cryptsetup.target.
Oct 2 19:30:49.364386 systemd[1]: Reached target remote-fs.target.
Oct 2 19:30:49.364397 systemd[1]: Reached target slices.target.
Oct 2 19:30:49.364408 systemd[1]: Reached target swap.target.
Oct 2 19:30:49.364418 systemd[1]: Reached target torcx.target.
Oct 2 19:30:49.364429 systemd[1]: Reached target veritysetup.target.
Oct 2 19:30:49.364439 systemd[1]: Listening on systemd-coredump.socket.
Oct 2 19:30:49.364449 systemd[1]: Listening on systemd-initctl.socket.
Oct 2 19:30:49.364461 systemd[1]: Listening on systemd-networkd.socket.
Oct 2 19:30:49.364473 systemd[1]: Listening on systemd-udevd-control.socket.
Oct 2 19:30:49.364485 systemd[1]: Listening on systemd-udevd-kernel.socket.
Oct 2 19:30:49.364495 systemd[1]: Listening on systemd-userdbd.socket.
Oct 2 19:30:49.364506 systemd[1]: Mounting dev-hugepages.mount...
Oct 2 19:30:49.364517 systemd[1]: Mounting dev-mqueue.mount...
Oct 2 19:30:49.364528 systemd[1]: Mounting media.mount...
Oct 2 19:30:49.364541 systemd[1]: Mounting sys-kernel-debug.mount...
Oct 2 19:30:49.364553 systemd[1]: Mounting sys-kernel-tracing.mount...
Oct 2 19:30:49.364565 systemd[1]: Mounting tmp.mount...
Oct 2 19:30:49.364577 systemd[1]: Starting flatcar-tmpfiles.service...
Oct 2 19:30:49.364590 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Oct 2 19:30:49.364601 systemd[1]: Starting kmod-static-nodes.service...
Oct 2 19:30:49.364612 systemd[1]: Starting modprobe@configfs.service...
Oct 2 19:30:49.364623 systemd[1]: Starting modprobe@dm_mod.service...
Oct 2 19:30:49.364634 systemd[1]: Starting modprobe@drm.service...
Oct 2 19:30:49.364646 systemd[1]: Starting modprobe@efi_pstore.service...
Oct 2 19:30:49.364657 systemd[1]: Starting modprobe@fuse.service...
Oct 2 19:30:49.364669 systemd[1]: Starting modprobe@loop.service...
Oct 2 19:30:49.364680 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 2 19:30:49.364694 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 2 19:30:49.364720 systemd[1]: Stopped systemd-fsck-root.service.
Oct 2 19:30:49.364732 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 2 19:30:49.364743 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 2 19:30:49.364801 systemd[1]: Stopped systemd-journald.service.
Oct 2 19:30:49.364815 systemd[1]: Starting systemd-journald.service...
Oct 2 19:30:49.364825 systemd[1]: Starting systemd-modules-load.service...
Oct 2 19:30:49.364836 systemd[1]: Starting systemd-network-generator.service...
Oct 2 19:30:49.364849 systemd[1]: Starting systemd-remount-fs.service...
Oct 2 19:30:49.364863 kernel: fuse: init (API version 7.34)
Oct 2 19:30:49.364874 systemd[1]: Starting systemd-udev-trigger.service...
Oct 2 19:30:49.364885 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 2 19:30:49.364896 systemd[1]: Stopped verity-setup.service.
Oct 2 19:30:49.364906 kernel: loop: module loaded
Oct 2 19:30:49.364916 systemd[1]: Mounted dev-hugepages.mount.
Oct 2 19:30:49.364927 systemd[1]: Mounted dev-mqueue.mount.
Oct 2 19:30:49.364937 systemd[1]: Mounted media.mount.
Oct 2 19:30:49.364949 systemd[1]: Mounted sys-kernel-debug.mount.
Oct 2 19:30:49.364967 systemd-journald[995]: Journal started
Oct 2 19:30:49.365013 systemd-journald[995]: Runtime Journal (/run/log/journal/29b71ed3e0314b80a286ec8bd3403e2e) is 6.0M, max 48.7M, 42.6M free.
Oct 2 19:30:47.317000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 2 19:30:47.489000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Oct 2 19:30:47.489000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Oct 2 19:30:47.489000 audit: BPF prog-id=10 op=LOAD
Oct 2 19:30:47.489000 audit: BPF prog-id=10 op=UNLOAD
Oct 2 19:30:47.489000 audit: BPF prog-id=11 op=LOAD
Oct 2 19:30:47.489000 audit: BPF prog-id=11 op=UNLOAD
Oct 2 19:30:49.249000 audit: BPF prog-id=12 op=LOAD
Oct 2 19:30:49.249000 audit: BPF prog-id=3 op=UNLOAD
Oct 2 19:30:49.249000 audit: BPF prog-id=13 op=LOAD
Oct 2 19:30:49.249000 audit: BPF prog-id=14 op=LOAD
Oct 2 19:30:49.249000 audit: BPF prog-id=4 op=UNLOAD
Oct 2 19:30:49.249000 audit: BPF prog-id=5 op=UNLOAD
Oct 2 19:30:49.250000 audit: BPF prog-id=15 op=LOAD
Oct 2 19:30:49.250000 audit: BPF prog-id=12 op=UNLOAD
Oct 2 19:30:49.250000 audit: BPF prog-id=16 op=LOAD
Oct 2 19:30:49.250000 audit: BPF prog-id=17 op=LOAD
Oct 2 19:30:49.250000 audit: BPF prog-id=13 op=UNLOAD
Oct 2 19:30:49.251000 audit: BPF prog-id=14 op=UNLOAD
Oct 2 19:30:49.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:49.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:49.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:49.260000 audit: BPF prog-id=15 op=UNLOAD
Oct 2 19:30:49.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:49.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:49.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:49.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:49.335000 audit: BPF prog-id=18 op=LOAD
Oct 2 19:30:49.335000 audit: BPF prog-id=19 op=LOAD
Oct 2 19:30:49.335000 audit: BPF prog-id=20 op=LOAD
Oct 2 19:30:49.335000 audit: BPF prog-id=16 op=UNLOAD
Oct 2 19:30:49.335000 audit: BPF prog-id=17 op=UNLOAD
Oct 2 19:30:49.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:49.362000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Oct 2 19:30:49.362000 audit[995]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffd839d050 a2=4000 a3=1 items=0 ppid=1 pid=995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 2 19:30:49.362000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Oct 2 19:30:47.533562 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:30:47Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]"
Oct 2 19:30:49.246771 systemd[1]: Queued start job for default target multi-user.target.
Oct 2 19:30:49.366455 systemd[1]: Started systemd-journald.service.
Oct 2 19:30:47.534174 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:30:47Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Oct 2 19:30:49.246784 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Oct 2 19:30:47.534193 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:30:47Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Oct 2 19:30:49.251630 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 2 19:30:49.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:47.534224 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:30:47Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Oct 2 19:30:49.366811 systemd[1]: Mounted sys-kernel-tracing.mount.
Oct 2 19:30:47.534234 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:30:47Z" level=debug msg="skipped missing lower profile" missing profile=oem
Oct 2 19:30:47.534264 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:30:47Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Oct 2 19:30:47.534275 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:30:47Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Oct 2 19:30:47.534492 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:30:47Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Oct 2 19:30:47.534527 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:30:47Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Oct 2 19:30:47.534539 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:30:47Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Oct 2 19:30:47.534998 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:30:47Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Oct 2 19:30:47.535032 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:30:47Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Oct 2 19:30:47.535050 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:30:47Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0
Oct 2 19:30:47.535064 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:30:47Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Oct 2 19:30:47.535081 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:30:47Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0
Oct 2 19:30:49.367709 systemd[1]: Mounted tmp.mount.
Oct 2 19:30:47.535116 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:30:47Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Oct 2 19:30:48.988743 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:30:48Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Oct 2 19:30:48.989011 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:30:48Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Oct 2 19:30:48.989127 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:30:48Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Oct 2 19:30:48.989290 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:30:48Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Oct 2 19:30:48.989338 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:30:48Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Oct 2 19:30:48.989405 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:30:48Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Oct 2 19:30:49.368719 systemd[1]: Finished kmod-static-nodes.service.
Oct 2 19:30:49.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:49.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:49.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:49.369617 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 2 19:30:49.370244 systemd[1]: Finished modprobe@configfs.service.
Oct 2 19:30:49.371084 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 2 19:30:49.371255 systemd[1]: Finished modprobe@dm_mod.service.
Oct 2 19:30:49.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.372061 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 19:30:49.372245 systemd[1]: Finished modprobe@drm.service. Oct 2 19:30:49.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.373327 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 19:30:49.373499 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 19:30:49.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.374362 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 19:30:49.374601 systemd[1]: Finished modprobe@fuse.service. 
Oct 2 19:30:49.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.375569 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 19:30:49.375719 systemd[1]: Finished modprobe@loop.service. Oct 2 19:30:49.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.376818 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:30:49.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.377750 systemd[1]: Finished systemd-network-generator.service. Oct 2 19:30:49.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.378719 systemd[1]: Finished systemd-remount-fs.service. 
Oct 2 19:30:49.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.379791 systemd[1]: Reached target network-pre.target. Oct 2 19:30:49.381687 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 19:30:49.383628 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 19:30:49.384189 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 19:30:49.386194 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 19:30:49.387979 systemd[1]: Starting systemd-journal-flush.service... Oct 2 19:30:49.392350 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 19:30:49.393423 systemd[1]: Starting systemd-random-seed.service... Oct 2 19:30:49.394228 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 19:30:49.395318 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:30:49.399554 systemd-journald[995]: Time spent on flushing to /var/log/journal/29b71ed3e0314b80a286ec8bd3403e2e is 12.458ms for 987 entries. Oct 2 19:30:49.399554 systemd-journald[995]: System Journal (/var/log/journal/29b71ed3e0314b80a286ec8bd3403e2e) is 8.0M, max 195.6M, 187.6M free. Oct 2 19:30:49.434370 systemd-journald[995]: Received client request to flush runtime journal. Oct 2 19:30:49.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 2 19:30:49.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.399636 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 19:30:49.401463 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 19:30:49.435006 udevadm[1033]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Oct 2 19:30:49.402675 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 19:30:49.404788 systemd[1]: Starting systemd-sysusers.service... Oct 2 19:30:49.407932 systemd[1]: Finished systemd-random-seed.service. Oct 2 19:30:49.408727 systemd[1]: Reached target first-boot-complete.target. Oct 2 19:30:49.410841 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:30:49.412967 systemd[1]: Starting systemd-udev-settle.service... Oct 2 19:30:49.435879 systemd[1]: Finished systemd-journal-flush.service. Oct 2 19:30:49.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.436923 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:30:49.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.437825 systemd[1]: Finished systemd-sysusers.service. Oct 2 19:30:49.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:30:49.439935 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:30:49.457090 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:30:49.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.807040 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 19:30:49.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.808000 audit: BPF prog-id=21 op=LOAD Oct 2 19:30:49.808000 audit: BPF prog-id=22 op=LOAD Oct 2 19:30:49.808000 audit: BPF prog-id=7 op=UNLOAD Oct 2 19:30:49.808000 audit: BPF prog-id=8 op=UNLOAD Oct 2 19:30:49.809406 systemd[1]: Starting systemd-udevd.service... Oct 2 19:30:49.843802 systemd-udevd[1038]: Using default interface naming scheme 'v252'. Oct 2 19:30:49.872045 systemd[1]: Started systemd-udevd.service. Oct 2 19:30:49.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.875000 audit: BPF prog-id=23 op=LOAD Oct 2 19:30:49.879111 systemd[1]: Starting systemd-networkd.service... Oct 2 19:30:49.887000 audit: BPF prog-id=24 op=LOAD Oct 2 19:30:49.887000 audit: BPF prog-id=25 op=LOAD Oct 2 19:30:49.887000 audit: BPF prog-id=26 op=LOAD Oct 2 19:30:49.888666 systemd[1]: Starting systemd-userdbd.service... Oct 2 19:30:49.897271 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Oct 2 19:30:49.955620 systemd[1]: Started systemd-userdbd.service. 
Oct 2 19:30:49.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.969025 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:30:49.989543 systemd[1]: Finished systemd-udev-settle.service. Oct 2 19:30:49.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.991554 systemd[1]: Starting lvm2-activation-early.service... Oct 2 19:30:50.021111 lvm[1070]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:30:50.034513 systemd-networkd[1050]: lo: Link UP Oct 2 19:30:50.034522 systemd-networkd[1050]: lo: Gained carrier Oct 2 19:30:50.034876 systemd-networkd[1050]: Enumeration completed Oct 2 19:30:50.034994 systemd-networkd[1050]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:30:50.035034 systemd[1]: Started systemd-networkd.service. Oct 2 19:30:50.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:50.036396 systemd-networkd[1050]: eth0: Link UP Oct 2 19:30:50.036404 systemd-networkd[1050]: eth0: Gained carrier Oct 2 19:30:50.056247 systemd[1]: Finished lvm2-activation-early.service. Oct 2 19:30:50.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:50.057221 systemd[1]: Reached target cryptsetup.target. 
Oct 2 19:30:50.059132 systemd[1]: Starting lvm2-activation.service... Oct 2 19:30:50.063762 lvm[1072]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:30:50.064228 systemd-networkd[1050]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 2 19:30:50.100175 systemd[1]: Finished lvm2-activation.service. Oct 2 19:30:50.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:50.101017 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:30:50.101736 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 19:30:50.101772 systemd[1]: Reached target local-fs.target. Oct 2 19:30:50.102340 systemd[1]: Reached target machines.target. Oct 2 19:30:50.104402 systemd[1]: Starting ldconfig.service... Oct 2 19:30:50.105391 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 19:30:50.105456 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:30:50.106840 systemd[1]: Starting systemd-boot-update.service... Oct 2 19:30:50.110617 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 19:30:50.112757 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 19:30:50.113834 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:30:50.113880 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:30:50.115068 systemd[1]: Starting systemd-tmpfiles-setup.service... 
Oct 2 19:30:50.120792 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1074 (bootctl) Oct 2 19:30:50.122087 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 19:30:50.128550 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 19:30:50.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:50.132799 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 19:30:50.199718 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 19:30:50.201744 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 19:30:50.218161 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 19:30:50.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:50.254692 systemd-fsck[1082]: fsck.fat 4.2 (2021-01-31) Oct 2 19:30:50.254692 systemd-fsck[1082]: /dev/vda1: 236 files, 113463/258078 clusters Oct 2 19:30:50.257219 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 19:30:50.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:50.354309 ldconfig[1073]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Oct 2 19:30:50.356743 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 19:30:50.358131 systemd[1]: Mounting boot.mount... Oct 2 19:30:50.358942 systemd[1]: Finished ldconfig.service. Oct 2 19:30:50.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:50.367036 systemd[1]: Mounted boot.mount. Oct 2 19:30:50.374137 systemd[1]: Finished systemd-boot-update.service. Oct 2 19:30:50.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:50.434835 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 19:30:50.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:50.436907 systemd[1]: Starting audit-rules.service... Oct 2 19:30:50.438466 systemd[1]: Starting clean-ca-certificates.service... Oct 2 19:30:50.442000 audit: BPF prog-id=27 op=LOAD Oct 2 19:30:50.440068 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 19:30:50.443701 systemd[1]: Starting systemd-resolved.service... Oct 2 19:30:50.445000 audit: BPF prog-id=28 op=LOAD Oct 2 19:30:50.446530 systemd[1]: Starting systemd-timesyncd.service... Oct 2 19:30:50.448377 systemd[1]: Starting systemd-update-utmp.service... Oct 2 19:30:50.449626 systemd[1]: Finished clean-ca-certificates.service. Oct 2 19:30:50.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:30:50.450572 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 19:30:50.456000 audit[1096]: SYSTEM_BOOT pid=1096 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:30:50.461026 systemd[1]: Finished systemd-update-utmp.service. Oct 2 19:30:50.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:50.473471 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 19:30:50.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:50.475778 systemd[1]: Starting systemd-update-done.service... Oct 2 19:30:50.483316 systemd[1]: Finished systemd-update-done.service. Oct 2 19:30:50.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:30:50.493000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 19:30:50.493000 audit[1107]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc9dcdfe0 a2=420 a3=0 items=0 ppid=1085 pid=1107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:50.493000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 19:30:50.494560 augenrules[1107]: No rules Oct 2 19:30:50.495693 systemd[1]: Finished audit-rules.service. Oct 2 19:30:50.515426 systemd-resolved[1089]: Positive Trust Anchors: Oct 2 19:30:50.515441 systemd-resolved[1089]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:30:50.515469 systemd-resolved[1089]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:30:50.520709 systemd[1]: Started systemd-timesyncd.service. Oct 2 19:30:50.521565 systemd-timesyncd[1095]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 2 19:30:50.521618 systemd-timesyncd[1095]: Initial clock synchronization to Mon 2023-10-02 19:30:50.129867 UTC. Oct 2 19:30:50.521760 systemd[1]: Reached target time-set.target. Oct 2 19:30:50.527295 systemd-resolved[1089]: Defaulting to hostname 'linux'. Oct 2 19:30:50.537906 systemd[1]: Started systemd-resolved.service. Oct 2 19:30:50.538574 systemd[1]: Reached target network.target. 
Oct 2 19:30:50.539083 systemd[1]: Reached target nss-lookup.target. Oct 2 19:30:50.539624 systemd[1]: Reached target sysinit.target. Oct 2 19:30:50.540224 systemd[1]: Started motdgen.path. Oct 2 19:30:50.540725 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 19:30:50.541615 systemd[1]: Started logrotate.timer. Oct 2 19:30:50.542235 systemd[1]: Started mdadm.timer. Oct 2 19:30:50.542717 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 19:30:50.543308 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 19:30:50.543337 systemd[1]: Reached target paths.target. Oct 2 19:30:50.543836 systemd[1]: Reached target timers.target. Oct 2 19:30:50.544736 systemd[1]: Listening on dbus.socket. Oct 2 19:30:50.546367 systemd[1]: Starting docker.socket... Oct 2 19:30:50.550385 systemd[1]: Listening on sshd.socket. Oct 2 19:30:50.551025 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:30:50.551636 systemd[1]: Listening on docker.socket. Oct 2 19:30:50.552244 systemd[1]: Reached target sockets.target. Oct 2 19:30:50.552776 systemd[1]: Reached target basic.target. Oct 2 19:30:50.553328 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:30:50.553365 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:30:50.554495 systemd[1]: Starting containerd.service... Oct 2 19:30:50.556191 systemd[1]: Starting dbus.service... Oct 2 19:30:50.557655 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 19:30:50.559514 systemd[1]: Starting extend-filesystems.service... 
Oct 2 19:30:50.560138 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 19:30:50.561268 systemd[1]: Starting motdgen.service... Oct 2 19:30:50.563460 jq[1117]: false Oct 2 19:30:50.563790 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 19:30:50.568998 systemd[1]: Starting prepare-critools.service... Oct 2 19:30:50.570842 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 19:30:50.572555 systemd[1]: Starting sshd-keygen.service... Oct 2 19:30:50.575204 systemd[1]: Starting systemd-logind.service... Oct 2 19:30:50.575833 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:30:50.575896 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 2 19:30:50.576804 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 19:30:50.577591 systemd[1]: Starting update-engine.service... Oct 2 19:30:50.579704 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 19:30:50.582632 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 19:30:50.594253 extend-filesystems[1118]: Found vda Oct 2 19:30:50.582816 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 19:30:50.585434 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 19:30:50.585607 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Oct 2 19:30:50.595459 jq[1132]: true Oct 2 19:30:50.596706 extend-filesystems[1118]: Found vda1 Oct 2 19:30:50.596706 extend-filesystems[1118]: Found vda2 Oct 2 19:30:50.596706 extend-filesystems[1118]: Found vda3 Oct 2 19:30:50.596706 extend-filesystems[1118]: Found usr Oct 2 19:30:50.596706 extend-filesystems[1118]: Found vda4 Oct 2 19:30:50.596706 extend-filesystems[1118]: Found vda6 Oct 2 19:30:50.596706 extend-filesystems[1118]: Found vda7 Oct 2 19:30:50.596706 extend-filesystems[1118]: Found vda9 Oct 2 19:30:50.596706 extend-filesystems[1118]: Checking size of /dev/vda9 Oct 2 19:30:50.603774 jq[1142]: true Oct 2 19:30:50.603868 tar[1135]: ./ Oct 2 19:30:50.603868 tar[1135]: ./loopback Oct 2 19:30:50.604043 tar[1136]: crictl Oct 2 19:30:50.608236 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 19:30:50.608445 systemd[1]: Finished motdgen.service. Oct 2 19:30:50.624719 dbus-daemon[1116]: [system] SELinux support is enabled Oct 2 19:30:50.624902 systemd[1]: Started dbus.service. Oct 2 19:30:50.630156 extend-filesystems[1118]: Old size kept for /dev/vda9 Oct 2 19:30:50.627992 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 19:30:50.628020 systemd[1]: Reached target system-config.target. Oct 2 19:30:50.628723 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 19:30:50.628737 systemd[1]: Reached target user-config.target. Oct 2 19:30:50.636266 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 19:30:50.636448 systemd[1]: Finished extend-filesystems.service. Oct 2 19:30:50.642565 systemd-logind[1129]: Watching system buttons on /dev/input/event0 (Power Button) Oct 2 19:30:50.646515 systemd-logind[1129]: New seat seat0. Oct 2 19:30:50.648939 systemd[1]: Started systemd-logind.service. 
Oct 2 19:30:50.681254 tar[1135]: ./bandwidth Oct 2 19:30:50.728142 bash[1169]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:30:50.734640 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 19:30:50.752694 tar[1135]: ./ptp Oct 2 19:30:50.835711 env[1141]: time="2023-10-02T19:30:50.835569120Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 19:30:50.856899 update_engine[1131]: I1002 19:30:50.856525 1131 main.cc:92] Flatcar Update Engine starting Oct 2 19:30:50.864291 env[1141]: time="2023-10-02T19:30:50.864203040Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 19:30:50.864504 env[1141]: time="2023-10-02T19:30:50.864482400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:30:50.865985 env[1141]: time="2023-10-02T19:30:50.865943040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:30:50.866020 env[1141]: time="2023-10-02T19:30:50.865985400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:30:50.866261 env[1141]: time="2023-10-02T19:30:50.866238000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:30:50.866314 env[1141]: time="2023-10-02T19:30:50.866261360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Oct 2 19:30:50.866314 env[1141]: time="2023-10-02T19:30:50.866275360Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 19:30:50.866314 env[1141]: time="2023-10-02T19:30:50.866285000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 19:30:50.866389 env[1141]: time="2023-10-02T19:30:50.866373320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:30:50.866849 env[1141]: time="2023-10-02T19:30:50.866827320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:30:50.866991 env[1141]: time="2023-10-02T19:30:50.866970200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:30:50.866991 env[1141]: time="2023-10-02T19:30:50.866989120Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 19:30:50.867076 env[1141]: time="2023-10-02T19:30:50.867057760Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 19:30:50.867115 env[1141]: time="2023-10-02T19:30:50.867075280Z" level=info msg="metadata content store policy set" policy=shared Oct 2 19:30:50.867510 tar[1135]: ./vlan Oct 2 19:30:50.868288 systemd[1]: Started update-engine.service. Oct 2 19:30:50.872535 env[1141]: time="2023-10-02T19:30:50.872001840Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 19:30:50.872535 env[1141]: time="2023-10-02T19:30:50.872034960Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Oct 2 19:30:50.872535 env[1141]: time="2023-10-02T19:30:50.872056080Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 19:30:50.872535 env[1141]: time="2023-10-02T19:30:50.872108680Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 19:30:50.872535 env[1141]: time="2023-10-02T19:30:50.872124280Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 19:30:50.872535 env[1141]: time="2023-10-02T19:30:50.872141320Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 19:30:50.872535 env[1141]: time="2023-10-02T19:30:50.872154040Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 19:30:50.872535 env[1141]: time="2023-10-02T19:30:50.872523520Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 19:30:50.871425 systemd[1]: Started locksmithd.service. Oct 2 19:30:50.872766 env[1141]: time="2023-10-02T19:30:50.872545080Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 19:30:50.872766 env[1141]: time="2023-10-02T19:30:50.872559600Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 19:30:50.872766 env[1141]: time="2023-10-02T19:30:50.872574200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 2 19:30:50.872766 env[1141]: time="2023-10-02T19:30:50.872587520Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Oct 2 19:30:50.872766 env[1141]: time="2023-10-02T19:30:50.872706600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 19:30:50.872868 env[1141]: time="2023-10-02T19:30:50.872785200Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 19:30:50.873079 env[1141]: time="2023-10-02T19:30:50.873052400Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 19:30:50.873136 env[1141]: time="2023-10-02T19:30:50.873086880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 19:30:50.873173 env[1141]: time="2023-10-02T19:30:50.873136880Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 19:30:50.873432 env[1141]: time="2023-10-02T19:30:50.873414520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 19:30:50.873463 env[1141]: time="2023-10-02T19:30:50.873433840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 19:30:50.873463 env[1141]: time="2023-10-02T19:30:50.873447240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 19:30:50.873463 env[1141]: time="2023-10-02T19:30:50.873459600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 19:30:50.873526 env[1141]: time="2023-10-02T19:30:50.873473960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 19:30:50.873526 env[1141]: time="2023-10-02T19:30:50.873487800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Oct 2 19:30:50.873526 env[1141]: time="2023-10-02T19:30:50.873499320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 19:30:50.873526 env[1141]: time="2023-10-02T19:30:50.873511360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 19:30:50.873526 env[1141]: time="2023-10-02T19:30:50.873524520Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 19:30:50.873673 env[1141]: time="2023-10-02T19:30:50.873654040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 19:30:50.873702 env[1141]: time="2023-10-02T19:30:50.873675600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 19:30:50.873702 env[1141]: time="2023-10-02T19:30:50.873690160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 19:30:50.873756 env[1141]: time="2023-10-02T19:30:50.873702000Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 19:30:50.873756 env[1141]: time="2023-10-02T19:30:50.873716160Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 19:30:50.873756 env[1141]: time="2023-10-02T19:30:50.873728680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 19:30:50.873756 env[1141]: time="2023-10-02T19:30:50.873746000Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 19:30:50.873838 env[1141]: time="2023-10-02T19:30:50.873781160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 2 19:30:50.874033 env[1141]: time="2023-10-02T19:30:50.873980960Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 19:30:50.875200 env[1141]: time="2023-10-02T19:30:50.874042240Z" level=info msg="Connect containerd service" Oct 2 19:30:50.875200 env[1141]: time="2023-10-02T19:30:50.874107960Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 19:30:50.875263 update_engine[1131]: I1002 19:30:50.874249 1131 update_check_scheduler.cc:74] Next update check in 3m57s Oct 2 19:30:50.876111 env[1141]: time="2023-10-02T19:30:50.876054760Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 19:30:50.878482 env[1141]: time="2023-10-02T19:30:50.878433720Z" level=info msg="Start subscribing containerd event" Oct 2 19:30:50.878540 env[1141]: time="2023-10-02T19:30:50.878503920Z" level=info msg="Start recovering state" Oct 2 19:30:50.878924 env[1141]: time="2023-10-02T19:30:50.878581280Z" level=info msg="Start event monitor" Oct 2 19:30:50.884333 env[1141]: time="2023-10-02T19:30:50.884298560Z" level=info msg="Start snapshots syncer" Oct 2 19:30:50.884387 env[1141]: time="2023-10-02T19:30:50.884343280Z" level=info msg="Start cni network conf syncer for default" Oct 2 19:30:50.884387 env[1141]: time="2023-10-02T19:30:50.884363320Z" level=info msg="Start streaming server" Oct 2 19:30:50.884648 env[1141]: time="2023-10-02T19:30:50.884625960Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 2 19:30:50.884705 env[1141]: time="2023-10-02T19:30:50.884674280Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Oct 2 19:30:50.884705 env[1141]: time="2023-10-02T19:30:50.884717680Z" level=info msg="containerd successfully booted in 0.049767s" Oct 2 19:30:50.884811 systemd[1]: Started containerd.service. Oct 2 19:30:50.915406 tar[1135]: ./host-device Oct 2 19:30:50.955312 tar[1135]: ./tuning Oct 2 19:30:50.980681 tar[1135]: ./vrf Oct 2 19:30:50.993462 locksmithd[1173]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 19:30:51.006484 tar[1135]: ./sbr Oct 2 19:30:51.030665 tar[1135]: ./tap Oct 2 19:30:51.058600 tar[1135]: ./dhcp Oct 2 19:30:51.096228 systemd-networkd[1050]: eth0: Gained IPv6LL Oct 2 19:30:51.127139 tar[1135]: ./static Oct 2 19:30:51.147425 tar[1135]: ./firewall Oct 2 19:30:51.164291 systemd[1]: Finished prepare-critools.service. Oct 2 19:30:51.177798 tar[1135]: ./macvlan Oct 2 19:30:51.205404 tar[1135]: ./dummy Oct 2 19:30:51.232547 tar[1135]: ./bridge Oct 2 19:30:51.262110 tar[1135]: ./ipvlan Oct 2 19:30:51.289208 tar[1135]: ./portmap Oct 2 19:30:51.314990 tar[1135]: ./host-local Oct 2 19:30:51.347719 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 19:30:51.573927 systemd[1]: Created slice system-sshd.slice. Oct 2 19:30:51.910404 sshd_keygen[1139]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 19:30:51.929158 systemd[1]: Finished sshd-keygen.service. Oct 2 19:30:51.931161 systemd[1]: Starting issuegen.service... Oct 2 19:30:51.932731 systemd[1]: Started sshd@0-10.0.0.13:22-10.0.0.1:41658.service. Oct 2 19:30:51.936832 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 19:30:51.936987 systemd[1]: Finished issuegen.service. Oct 2 19:30:51.939135 systemd[1]: Starting systemd-user-sessions.service... Oct 2 19:30:51.948714 systemd[1]: Finished systemd-user-sessions.service. Oct 2 19:30:51.951021 systemd[1]: Started getty@tty1.service. Oct 2 19:30:51.952934 systemd[1]: Started serial-getty@ttyAMA0.service. 
Oct 2 19:30:51.953790 systemd[1]: Reached target getty.target. Oct 2 19:30:51.954434 systemd[1]: Reached target multi-user.target. Oct 2 19:30:51.956788 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 19:30:51.964676 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 19:30:51.964896 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 19:30:51.965784 systemd[1]: Startup finished in 627ms (kernel) + 5.697s (initrd) + 4.687s (userspace) = 11.012s. Oct 2 19:30:51.992419 sshd[1192]: Accepted publickey for core from 10.0.0.1 port 41658 ssh2: RSA SHA256:R3OEPhWkv1tFTzpShOfiax8deu3eER597BueAu9DxLo Oct 2 19:30:51.995538 sshd[1192]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:30:52.012229 systemd[1]: Created slice user-500.slice. Oct 2 19:30:52.013626 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 19:30:52.015841 systemd-logind[1129]: New session 1 of user core. Oct 2 19:30:52.022448 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 19:30:52.024369 systemd[1]: Starting user@500.service... Oct 2 19:30:52.028215 (systemd)[1201]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:30:52.091750 systemd[1201]: Queued start job for default target default.target. Oct 2 19:30:52.092263 systemd[1201]: Reached target paths.target. Oct 2 19:30:52.092283 systemd[1201]: Reached target sockets.target. Oct 2 19:30:52.092294 systemd[1201]: Reached target timers.target. Oct 2 19:30:52.092304 systemd[1201]: Reached target basic.target. Oct 2 19:30:52.092355 systemd[1201]: Reached target default.target. Oct 2 19:30:52.092379 systemd[1201]: Startup finished in 57ms. Oct 2 19:30:52.092599 systemd[1]: Started user@500.service. Oct 2 19:30:52.093631 systemd[1]: Started session-1.scope. Oct 2 19:30:52.143847 systemd[1]: Started sshd@1-10.0.0.13:22-10.0.0.1:41666.service. 
Oct 2 19:30:52.183896 sshd[1210]: Accepted publickey for core from 10.0.0.1 port 41666 ssh2: RSA SHA256:R3OEPhWkv1tFTzpShOfiax8deu3eER597BueAu9DxLo Oct 2 19:30:52.185218 sshd[1210]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:30:52.189086 systemd-logind[1129]: New session 2 of user core. Oct 2 19:30:52.189503 systemd[1]: Started session-2.scope. Oct 2 19:30:52.245267 sshd[1210]: pam_unix(sshd:session): session closed for user core Oct 2 19:30:52.248566 systemd[1]: Started sshd@2-10.0.0.13:22-10.0.0.1:41668.service. Oct 2 19:30:52.250379 systemd[1]: sshd@1-10.0.0.13:22-10.0.0.1:41666.service: Deactivated successfully. Oct 2 19:30:52.251114 systemd[1]: session-2.scope: Deactivated successfully. Oct 2 19:30:52.251654 systemd-logind[1129]: Session 2 logged out. Waiting for processes to exit. Oct 2 19:30:52.252494 systemd-logind[1129]: Removed session 2. Oct 2 19:30:52.280738 sshd[1215]: Accepted publickey for core from 10.0.0.1 port 41668 ssh2: RSA SHA256:R3OEPhWkv1tFTzpShOfiax8deu3eER597BueAu9DxLo Oct 2 19:30:52.281942 sshd[1215]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:30:52.285042 systemd-logind[1129]: New session 3 of user core. Oct 2 19:30:52.285869 systemd[1]: Started session-3.scope. Oct 2 19:30:52.336418 sshd[1215]: pam_unix(sshd:session): session closed for user core Oct 2 19:30:52.339463 systemd[1]: sshd@2-10.0.0.13:22-10.0.0.1:41668.service: Deactivated successfully. Oct 2 19:30:52.340171 systemd[1]: session-3.scope: Deactivated successfully. Oct 2 19:30:52.340683 systemd-logind[1129]: Session 3 logged out. Waiting for processes to exit. Oct 2 19:30:52.341786 systemd[1]: Started sshd@3-10.0.0.13:22-10.0.0.1:41680.service. Oct 2 19:30:52.342468 systemd-logind[1129]: Removed session 3. 
Oct 2 19:30:52.375963 sshd[1222]: Accepted publickey for core from 10.0.0.1 port 41680 ssh2: RSA SHA256:R3OEPhWkv1tFTzpShOfiax8deu3eER597BueAu9DxLo Oct 2 19:30:52.377462 sshd[1222]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:30:52.380716 systemd-logind[1129]: New session 4 of user core. Oct 2 19:30:52.381506 systemd[1]: Started session-4.scope. Oct 2 19:30:52.435995 sshd[1222]: pam_unix(sshd:session): session closed for user core Oct 2 19:30:52.438578 systemd[1]: sshd@3-10.0.0.13:22-10.0.0.1:41680.service: Deactivated successfully. Oct 2 19:30:52.439238 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 19:30:52.439702 systemd-logind[1129]: Session 4 logged out. Waiting for processes to exit. Oct 2 19:30:52.440714 systemd[1]: Started sshd@4-10.0.0.13:22-10.0.0.1:41684.service. Oct 2 19:30:52.441286 systemd-logind[1129]: Removed session 4. Oct 2 19:30:52.473911 sshd[1228]: Accepted publickey for core from 10.0.0.1 port 41684 ssh2: RSA SHA256:R3OEPhWkv1tFTzpShOfiax8deu3eER597BueAu9DxLo Oct 2 19:30:52.475166 sshd[1228]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:30:52.478492 systemd-logind[1129]: New session 5 of user core. Oct 2 19:30:52.479399 systemd[1]: Started session-5.scope. Oct 2 19:30:52.541547 sudo[1231]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 19:30:52.541757 sudo[1231]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:30:52.554369 dbus-daemon[1116]: avc: received setenforce notice (enforcing=1) Oct 2 19:30:52.556197 sudo[1231]: pam_unix(sudo:session): session closed for user root Oct 2 19:30:52.558384 sshd[1228]: pam_unix(sshd:session): session closed for user core Oct 2 19:30:52.562278 systemd[1]: sshd@4-10.0.0.13:22-10.0.0.1:41684.service: Deactivated successfully. Oct 2 19:30:52.563060 systemd[1]: session-5.scope: Deactivated successfully. 
Oct 2 19:30:52.563708 systemd-logind[1129]: Session 5 logged out. Waiting for processes to exit. Oct 2 19:30:52.564984 systemd[1]: Started sshd@5-10.0.0.13:22-10.0.0.1:41686.service. Oct 2 19:30:52.565665 systemd-logind[1129]: Removed session 5. Oct 2 19:30:52.599519 sshd[1235]: Accepted publickey for core from 10.0.0.1 port 41686 ssh2: RSA SHA256:R3OEPhWkv1tFTzpShOfiax8deu3eER597BueAu9DxLo Oct 2 19:30:52.600915 sshd[1235]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:30:52.604223 systemd-logind[1129]: New session 6 of user core. Oct 2 19:30:52.605015 systemd[1]: Started session-6.scope. Oct 2 19:30:52.655940 sudo[1239]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 19:30:52.656157 sudo[1239]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:30:52.658899 sudo[1239]: pam_unix(sudo:session): session closed for user root Oct 2 19:30:52.663464 sudo[1238]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 19:30:52.663651 sudo[1238]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:30:52.672426 systemd[1]: Stopping audit-rules.service... Oct 2 19:30:52.672000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:30:52.674028 auditctl[1242]: No rules Oct 2 19:30:52.674312 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 19:30:52.674392 kernel: kauditd_printk_skb: 123 callbacks suppressed Oct 2 19:30:52.674427 kernel: audit: type=1305 audit(1696275052.672:162): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:30:52.674465 systemd[1]: Stopped audit-rules.service. 
Oct 2 19:30:52.678780 kernel: audit: type=1300 audit(1696275052.672:162): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc5400870 a2=420 a3=0 items=0 ppid=1 pid=1242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:52.678847 kernel: audit: type=1327 audit(1696275052.672:162): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:30:52.672000 audit[1242]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc5400870 a2=420 a3=0 items=0 ppid=1 pid=1242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:52.672000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:30:52.675812 systemd[1]: Starting audit-rules.service... Oct 2 19:30:52.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:52.680872 kernel: audit: type=1131 audit(1696275052.673:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:52.693656 augenrules[1259]: No rules Oct 2 19:30:52.695346 systemd[1]: Finished audit-rules.service. Oct 2 19:30:52.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:30:52.696792 sudo[1238]: pam_unix(sudo:session): session closed for user root Oct 2 19:30:52.695000 audit[1238]: USER_END pid=1238 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:30:52.699976 kernel: audit: type=1130 audit(1696275052.694:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:52.700058 kernel: audit: type=1106 audit(1696275052.695:165): pid=1238 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:30:52.700083 kernel: audit: type=1104 audit(1696275052.695:166): pid=1238 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:30:52.695000 audit[1238]: CRED_DISP pid=1238 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:30:52.700554 sshd[1235]: pam_unix(sshd:session): session closed for user core Oct 2 19:30:52.704201 kernel: audit: type=1106 audit(1696275052.700:167): pid=1235 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:30:52.700000 audit[1235]: USER_END pid=1235 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:30:52.702782 systemd[1]: sshd@5-10.0.0.13:22-10.0.0.1:41686.service: Deactivated successfully. Oct 2 19:30:52.703463 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 19:30:52.706335 kernel: audit: type=1104 audit(1696275052.700:168): pid=1235 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:30:52.700000 audit[1235]: CRED_DISP pid=1235 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:30:52.706839 systemd-logind[1129]: Session 6 logged out. Waiting for processes to exit. Oct 2 19:30:52.708064 systemd[1]: Started sshd@6-10.0.0.13:22-10.0.0.1:41698.service. Oct 2 19:30:52.708778 systemd-logind[1129]: Removed session 6. Oct 2 19:30:52.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.13:22-10.0.0.1:41686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 2 19:30:52.711794 kernel: audit: type=1131 audit(1696275052.701:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.13:22-10.0.0.1:41686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:52.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.13:22-10.0.0.1:41698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:52.740000 audit[1265]: USER_ACCT pid=1265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:30:52.741661 sshd[1265]: Accepted publickey for core from 10.0.0.1 port 41698 ssh2: RSA SHA256:R3OEPhWkv1tFTzpShOfiax8deu3eER597BueAu9DxLo Oct 2 19:30:52.741000 audit[1265]: CRED_ACQ pid=1265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:30:52.741000 audit[1265]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe334b9a0 a2=3 a3=1 items=0 ppid=1 pid=1265 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:52.741000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 19:30:52.742815 sshd[1265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:30:52.746102 systemd-logind[1129]: New session 7 of user core. Oct 2 19:30:52.746921 systemd[1]: Started session-7.scope. 
Oct 2 19:30:52.749000 audit[1265]: USER_START pid=1265 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:30:52.751000 audit[1267]: CRED_ACQ pid=1267 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:30:52.800257 sudo[1268]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 19:30:52.800458 sudo[1268]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:30:52.799000 audit[1268]: USER_ACCT pid=1268 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:30:52.799000 audit[1268]: CRED_REFR pid=1268 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:30:52.801000 audit[1268]: USER_START pid=1268 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:30:53.323832 systemd[1]: Reloading. 
Oct 2 19:30:53.377030 /usr/lib/systemd/system-generators/torcx-generator[1298]: time="2023-10-02T19:30:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:30:53.377061 /usr/lib/systemd/system-generators/torcx-generator[1298]: time="2023-10-02T19:30:53Z" level=info msg="torcx already run" Oct 2 19:30:53.437256 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:30:53.437275 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:30:53.453872 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Oct 2 19:30:53.496000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.496000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.496000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.496000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.496000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.496000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.496000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.496000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.496000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit: BPF prog-id=34 op=LOAD Oct 2 19:30:53.497000 audit: BPF prog-id=18 op=UNLOAD Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit: BPF prog-id=35 op=LOAD Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:30:53.497000 audit: BPF prog-id=36 op=LOAD Oct 2 19:30:53.497000 audit: BPF prog-id=19 op=UNLOAD Oct 2 19:30:53.497000 audit: BPF prog-id=20 op=UNLOAD Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit: BPF prog-id=37 op=LOAD Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.497000 audit: BPF prog-id=38 
op=LOAD Oct 2 19:30:53.497000 audit: BPF prog-id=21 op=UNLOAD Oct 2 19:30:53.497000 audit: BPF prog-id=22 op=UNLOAD Oct 2 19:30:53.498000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.498000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.498000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.498000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.498000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.498000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.498000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.498000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.498000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:30:53.498000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.498000 audit: BPF prog-id=39 op=LOAD Oct 2 19:30:53.498000 audit: BPF prog-id=27 op=UNLOAD Oct 2 19:30:53.499000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.499000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.499000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.499000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.499000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.499000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.499000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.499000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.499000 audit[1]: 
AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.499000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.499000 audit: BPF prog-id=40 op=LOAD Oct 2 19:30:53.499000 audit: BPF prog-id=28 op=UNLOAD Oct 2 19:30:53.501000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.501000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.501000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.501000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.501000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.501000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.501000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.501000 audit[1]: AVC avc: denied { perfmon } for 
pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.501000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.501000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.501000 audit: BPF prog-id=41 op=LOAD Oct 2 19:30:53.501000 audit: BPF prog-id=24 op=UNLOAD Oct 2 19:30:53.501000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.501000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.501000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.501000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.501000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.501000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.501000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.501000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.501000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.501000 audit: BPF prog-id=42 op=LOAD Oct 2 19:30:53.501000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.501000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.501000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.501000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.501000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.501000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.501000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.501000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.501000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.501000 audit: BPF prog-id=43 op=LOAD Oct 2 19:30:53.501000 audit: BPF prog-id=25 op=UNLOAD Oct 2 19:30:53.501000 audit: BPF prog-id=26 op=UNLOAD Oct 2 19:30:53.502000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.502000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.502000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.502000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.502000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.502000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.502000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.502000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.502000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.502000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.502000 audit: BPF prog-id=44 op=LOAD Oct 2 19:30:53.502000 audit: BPF prog-id=32 op=UNLOAD Oct 2 19:30:53.502000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.502000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.502000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.502000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.502000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.502000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.502000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.502000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.502000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.504000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.504000 audit: BPF prog-id=45 op=LOAD Oct 2 19:30:53.504000 audit: BPF prog-id=23 op=UNLOAD Oct 2 19:30:53.505000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.505000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.505000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.505000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.505000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.505000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.505000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.505000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.505000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.505000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.505000 audit: BPF prog-id=46 op=LOAD Oct 2 19:30:53.505000 audit: BPF prog-id=29 op=UNLOAD Oct 2 19:30:53.505000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.505000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.505000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.505000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.505000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.505000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.505000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.505000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.505000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.505000 audit: BPF prog-id=47 op=LOAD Oct 2 19:30:53.505000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.505000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.505000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.505000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:30:53.505000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.505000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.505000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.505000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.505000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.505000 audit: BPF prog-id=48 op=LOAD Oct 2 19:30:53.505000 audit: BPF prog-id=30 op=UNLOAD Oct 2 19:30:53.505000 audit: BPF prog-id=31 op=UNLOAD Oct 2 19:30:53.511513 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 19:30:54.169693 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 19:30:54.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:54.170304 systemd[1]: Reached target network-online.target. Oct 2 19:30:54.171883 systemd[1]: Started kubelet.service. Oct 2 19:30:54.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Oct 2 19:30:54.182657 systemd[1]: Starting coreos-metadata.service...
Oct 2 19:30:54.190261 systemd[1]: coreos-metadata.service: Deactivated successfully.
Oct 2 19:30:54.190537 systemd[1]: Finished coreos-metadata.service.
Oct 2 19:30:54.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:54.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:54.298612 kubelet[1336]: E1002 19:30:54.298534 1336 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Oct 2 19:30:54.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Oct 2 19:30:54.300987 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 2 19:30:54.301131 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 2 19:30:54.484672 systemd[1]: Stopped kubelet.service.
Oct 2 19:30:54.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:54.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:54.500179 systemd[1]: Reloading.
Oct 2 19:30:54.549415 /usr/lib/systemd/system-generators/torcx-generator[1403]: time="2023-10-02T19:30:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]"
Oct 2 19:30:54.549759 /usr/lib/systemd/system-generators/torcx-generator[1403]: time="2023-10-02T19:30:54Z" level=info msg="torcx already run"
Oct 2 19:30:54.610729 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Oct 2 19:30:54.610747 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 2 19:30:54.628718 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
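[Editor's note] Two things stand out in the entries above. The kubelet exit is the standard symptom of a missing /var/lib/kubelet/config.yaml, which is normally written by `kubeadm init`/`kubeadm join`; until then the unit will keep failing with status=1/FAILURE on every restart attempt. The surrounding AVC flood is systemd being denied CAP_BPF (capability 39) and CAP_PERFMON (capability 38) by the SELinux policy while it reloads its BPF programs (the paired `BPF prog-id=N op=LOAD/UNLOAD` records). As a minimal sketch for triaging such a flood — the sample lines and the `summarize_avc` helper are illustrative only, not part of any log tooling — denials can be tallied per permission:

```python
import re
from collections import Counter

# Abbreviated sample lines in the same shape as the audit records above.
SAMPLE = """\
audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39
audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38
audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39
"""

def summarize_avc(log_text: str) -> Counter:
    # Pull the permission name out of the braces in each AVC denial record.
    return Counter(re.findall(r'avc:\s+denied\s+\{\s*(\w+)\s*\}', log_text))

print(summarize_avc(SAMPLE))  # e.g. Counter({'bpf': 2, 'perfmon': 1})
```

Run against the full journal (e.g. piped from `journalctl -b`), the same regex collapses thousands of repeated records into a handful of permission counts, which is usually enough to decide whether a policy change or a `permissive` domain is worth investigating.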
Oct 2 19:30:54.672000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.672000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.672000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.672000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.672000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.672000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.672000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.672000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.672000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit: BPF prog-id=49 op=LOAD Oct 2 19:30:54.673000 audit: BPF prog-id=34 op=UNLOAD Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit: BPF prog-id=50 op=LOAD Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:30:54.673000 audit: BPF prog-id=51 op=LOAD Oct 2 19:30:54.673000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:30:54.673000 audit: BPF prog-id=36 op=UNLOAD Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit: BPF prog-id=52 op=LOAD Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.673000 audit: BPF prog-id=53 
op=LOAD Oct 2 19:30:54.673000 audit: BPF prog-id=37 op=UNLOAD Oct 2 19:30:54.673000 audit: BPF prog-id=38 op=UNLOAD Oct 2 19:30:54.674000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.674000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.674000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.674000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.674000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.674000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.674000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.674000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.674000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:30:54.674000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.674000 audit: BPF prog-id=54 op=LOAD Oct 2 19:30:54.674000 audit: BPF prog-id=39 op=UNLOAD Oct 2 19:30:54.675000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.675000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.675000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.675000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.675000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.675000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.675000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.675000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.675000 audit[1]: 
AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.676000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.676000 audit: BPF prog-id=55 op=LOAD Oct 2 19:30:54.676000 audit: BPF prog-id=40 op=UNLOAD Oct 2 19:30:54.677000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.677000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.677000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.677000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.677000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.677000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.677000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.677000 audit[1]: AVC avc: denied { perfmon } for 
pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.677000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.677000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.677000 audit: BPF prog-id=56 op=LOAD Oct 2 19:30:54.677000 audit: BPF prog-id=41 op=UNLOAD Oct 2 19:30:54.677000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.677000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.677000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.677000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.677000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.677000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.677000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.677000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.677000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.677000 audit: BPF prog-id=57 op=LOAD Oct 2 19:30:54.677000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.677000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.677000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.677000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.677000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.677000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.677000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.677000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.677000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.677000 audit: BPF prog-id=58 op=LOAD Oct 2 19:30:54.677000 audit: BPF prog-id=42 op=UNLOAD Oct 2 19:30:54.677000 audit: BPF prog-id=43 op=UNLOAD Oct 2 19:30:54.678000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.678000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.678000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.678000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.678000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.678000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.678000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.678000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.678000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.678000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.678000 audit: BPF prog-id=59 op=LOAD Oct 2 19:30:54.678000 audit: BPF prog-id=44 op=UNLOAD Oct 2 19:30:54.679000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.679000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.679000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.679000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.679000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.679000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.679000 audit: BPF prog-id=60 op=LOAD Oct 2 19:30:54.679000 audit: BPF prog-id=45 op=UNLOAD Oct 2 19:30:54.680000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.680000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.680000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.680000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.680000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.680000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.680000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.680000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.680000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.680000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.680000 audit: BPF prog-id=61 op=LOAD Oct 2 19:30:54.680000 audit: BPF prog-id=46 op=UNLOAD Oct 2 19:30:54.680000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.680000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.680000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.680000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.680000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.680000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.680000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.680000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.680000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.680000 audit: BPF prog-id=62 op=LOAD Oct 2 19:30:54.680000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.680000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.680000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.680000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:30:54.680000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.680000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.680000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.680000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.680000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.680000 audit: BPF prog-id=63 op=LOAD Oct 2 19:30:54.680000 audit: BPF prog-id=47 op=UNLOAD Oct 2 19:30:54.680000 audit: BPF prog-id=48 op=UNLOAD Oct 2 19:30:54.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:54.692104 systemd[1]: Started kubelet.service. Oct 2 19:30:54.736251 kubelet[1440]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:30:54.736251 kubelet[1440]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Oct 2 19:30:54.736251 kubelet[1440]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:30:54.736547 kubelet[1440]: I1002 19:30:54.736269 1440 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:30:55.695998 kubelet[1440]: I1002 19:30:55.695947 1440 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Oct 2 19:30:55.695998 kubelet[1440]: I1002 19:30:55.695992 1440 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:30:55.696253 kubelet[1440]: I1002 19:30:55.696232 1440 server.go:837] "Client rotation is on, will bootstrap in background" Oct 2 19:30:55.699913 kubelet[1440]: I1002 19:30:55.699886 1440 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:30:55.701942 kubelet[1440]: W1002 19:30:55.701925 1440 machine.go:65] Cannot read vendor id correctly, set empty. Oct 2 19:30:55.702675 kubelet[1440]: I1002 19:30:55.702634 1440 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 2 19:30:55.702893 kubelet[1440]: I1002 19:30:55.702883 1440 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:30:55.702956 kubelet[1440]: I1002 19:30:55.702947 1440 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Oct 2 19:30:55.703030 kubelet[1440]: I1002 19:30:55.702971 1440 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 19:30:55.703030 kubelet[1440]: I1002 19:30:55.702982 1440 container_manager_linux.go:302] "Creating device plugin manager" Oct 2 19:30:55.703109 kubelet[1440]: I1002 19:30:55.703077 1440 state_mem.go:36] "Initialized new in-memory state store" Oct 2 
19:30:55.708148 kubelet[1440]: I1002 19:30:55.708127 1440 kubelet.go:405] "Attempting to sync node with API server" Oct 2 19:30:55.708148 kubelet[1440]: I1002 19:30:55.708153 1440 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:30:55.708324 kubelet[1440]: I1002 19:30:55.708188 1440 kubelet.go:309] "Adding apiserver pod source" Oct 2 19:30:55.708324 kubelet[1440]: I1002 19:30:55.708198 1440 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:30:55.708477 kubelet[1440]: E1002 19:30:55.708457 1440 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:55.708540 kubelet[1440]: E1002 19:30:55.708471 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:55.709691 kubelet[1440]: I1002 19:30:55.709651 1440 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:30:55.710264 kubelet[1440]: W1002 19:30:55.710236 1440 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 2 19:30:55.711216 kubelet[1440]: I1002 19:30:55.711122 1440 server.go:1168] "Started kubelet" Oct 2 19:30:55.712328 kubelet[1440]: I1002 19:30:55.712314 1440 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Oct 2 19:30:55.712933 kubelet[1440]: I1002 19:30:55.712916 1440 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:30:55.711000 audit[1440]: AVC avc: denied { mac_admin } for pid=1440 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:55.711000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:30:55.711000 audit[1440]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40009574a0 a1=40009430e0 a2=4000957470 a3=25 items=0 ppid=1 pid=1440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.711000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:30:55.712000 audit[1440]: AVC avc: denied { mac_admin } for pid=1440 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:55.712000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:30:55.712000 audit[1440]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000940da0 a1=40009430f8 a2=4000957530 a3=25 items=0 ppid=1 pid=1440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.712000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:30:55.713441 kubelet[1440]: I1002 19:30:55.713157 1440 kubelet.go:1355] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:30:55.713441 kubelet[1440]: I1002 19:30:55.713198 1440 kubelet.go:1359] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:30:55.713441 kubelet[1440]: E1002 19:30:55.713204 1440 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:30:55.713441 kubelet[1440]: E1002 19:30:55.713227 1440 kubelet.go:1400] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:30:55.713441 kubelet[1440]: I1002 19:30:55.713258 1440 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:30:55.714919 kubelet[1440]: I1002 19:30:55.714895 1440 server.go:461] "Adding debug handlers to kubelet server" Oct 2 19:30:55.716811 kubelet[1440]: E1002 19:30:55.716781 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 19:30:55.716913 kubelet[1440]: I1002 19:30:55.716842 1440 volume_manager.go:284] "Starting Kubelet Volume Manager" Oct 2 19:30:55.716992 kubelet[1440]: I1002 19:30:55.716966 1440 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Oct 2 19:30:55.718140 kubelet[1440]: E1002 19:30:55.717996 1440 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a612d09038f20", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 711063840, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 711063840, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), 
ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:30:55.719128 kubelet[1440]: W1002 19:30:55.719082 1440 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:30:55.719196 kubelet[1440]: W1002 19:30:55.719164 1440 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:30:55.719196 kubelet[1440]: E1002 19:30:55.719177 1440 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:30:55.719261 kubelet[1440]: W1002 19:30:55.719212 1440 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.13" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:30:55.719261 kubelet[1440]: E1002 19:30:55.719223 1440 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.13" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:30:55.719359 kubelet[1440]: E1002 19:30:55.719346 1440 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster 
scope Oct 2 19:30:55.719815 kubelet[1440]: E1002 19:30:55.719735 1440 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a612d09246945", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 713216837, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 713216837, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:30:55.721904 kubelet[1440]: E1002 19:30:55.721878 1440 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.13\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Oct 2 19:30:55.737000 audit[1455]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1455 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:30:55.737000 audit[1455]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffffe08950 a2=0 a3=1 items=0 ppid=1440 pid=1455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.737000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:30:55.738000 audit[1458]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1458 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:30:55.738000 audit[1458]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffcc3dfb20 a2=0 a3=1 items=0 ppid=1440 pid=1458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.738000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:30:55.739981 kubelet[1440]: I1002 19:30:55.739956 1440 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 2 19:30:55.740363 kubelet[1440]: I1002 19:30:55.740346 1440 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 2 19:30:55.740440 kubelet[1440]: I1002 19:30:55.740423 1440 state_mem.go:36] "Initialized new 
in-memory state store" Oct 2 19:30:55.740864 kubelet[1440]: E1002 19:30:55.740772 1440 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a612d0ab00c3f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.13 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 739145279, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 739145279, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:30:55.742292 kubelet[1440]: E1002 19:30:55.742204 1440 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a612d0ab03ca9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.13 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 739157673, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 739157673, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:30:55.743351 kubelet[1440]: I1002 19:30:55.743329 1440 policy_none.go:49] "None policy: Start" Oct 2 19:30:55.743469 kubelet[1440]: E1002 19:30:55.743403 1440 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a612d0ab04cbf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.13 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 739161791, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 739161791, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:30:55.744311 kubelet[1440]: I1002 19:30:55.744295 1440 memory_manager.go:169] "Starting memorymanager" policy="None" Oct 2 19:30:55.744433 kubelet[1440]: I1002 19:30:55.744420 1440 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:30:55.749998 systemd[1]: Created slice kubepods.slice. Oct 2 19:30:55.753601 systemd[1]: Created slice kubepods-burstable.slice. 
Oct 2 19:30:55.756135 systemd[1]: Created slice kubepods-besteffort.slice. Oct 2 19:30:55.740000 audit[1460]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1460 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:30:55.740000 audit[1460]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffdadac330 a2=0 a3=1 items=0 ppid=1440 pid=1460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.740000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:30:55.759000 audit[1465]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1465 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:30:55.759000 audit[1465]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe8d60c30 a2=0 a3=1 items=0 ppid=1440 pid=1465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.759000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:30:55.762738 kubelet[1440]: I1002 19:30:55.762678 1440 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:30:55.761000 audit[1440]: AVC avc: denied { mac_admin } for pid=1440 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:55.761000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:30:55.761000 audit[1440]: SYSCALL arch=c00000b7 
syscall=5 success=no exit=-22 a0=40002d7560 a1=40002e2648 a2=40002d7530 a3=25 items=0 ppid=1 pid=1440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.761000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:30:55.762928 kubelet[1440]: I1002 19:30:55.762746 1440 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:30:55.763236 kubelet[1440]: I1002 19:30:55.763213 1440 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:30:55.764666 kubelet[1440]: E1002 19:30:55.764625 1440 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.13\" not found" Oct 2 19:30:55.769418 kubelet[1440]: E1002 19:30:55.769315 1440 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a612d0c3446bf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", 
Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 764588223, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 764588223, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:30:55.796000 audit[1471]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1471 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:30:55.796000 audit[1471]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffd678e4e0 a2=0 a3=1 items=0 ppid=1440 pid=1471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.796000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:30:55.798104 kubelet[1440]: I1002 19:30:55.798081 1440 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Oct 2 19:30:55.797000 audit[1472]: NETFILTER_CFG table=mangle:7 family=10 entries=2 op=nft_register_chain pid=1472 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:30:55.797000 audit[1472]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffee027be0 a2=0 a3=1 items=0 ppid=1440 pid=1472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.797000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:30:55.799244 kubelet[1440]: I1002 19:30:55.799229 1440 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Oct 2 19:30:55.799357 kubelet[1440]: I1002 19:30:55.799346 1440 status_manager.go:207] "Starting to sync pod status with apiserver" Oct 2 19:30:55.798000 audit[1473]: NETFILTER_CFG table=mangle:8 family=2 entries=1 op=nft_register_chain pid=1473 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:30:55.798000 audit[1473]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe3c2d4b0 a2=0 a3=1 items=0 ppid=1440 pid=1473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.798000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:30:55.799728 kubelet[1440]: I1002 19:30:55.799712 1440 kubelet.go:2257] "Starting kubelet main sync loop" Oct 2 19:30:55.799824 kubelet[1440]: E1002 19:30:55.799815 1440 kubelet.go:2281] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 19:30:55.800000 audit[1475]: NETFILTER_CFG table=nat:9 family=2 entries=2 
op=nft_register_chain pid=1475 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:30:55.800000 audit[1475]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffd3a55a90 a2=0 a3=1 items=0 ppid=1440 pid=1475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.800000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:30:55.801424 kubelet[1440]: W1002 19:30:55.801030 1440 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:30:55.801424 kubelet[1440]: E1002 19:30:55.801060 1440 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:30:55.800000 audit[1474]: NETFILTER_CFG table=mangle:10 family=10 entries=1 op=nft_register_chain pid=1474 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:30:55.800000 audit[1474]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffd0c4f00 a2=0 a3=1 items=0 ppid=1440 pid=1474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.800000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:30:55.801000 audit[1476]: NETFILTER_CFG table=filter:11 family=2 
entries=1 op=nft_register_chain pid=1476 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:30:55.801000 audit[1476]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd3adbad0 a2=0 a3=1 items=0 ppid=1440 pid=1476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.801000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:30:55.802000 audit[1477]: NETFILTER_CFG table=nat:12 family=10 entries=2 op=nft_register_chain pid=1477 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:30:55.802000 audit[1477]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffc6c08670 a2=0 a3=1 items=0 ppid=1440 pid=1477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.802000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:30:55.804000 audit[1478]: NETFILTER_CFG table=filter:13 family=10 entries=2 op=nft_register_chain pid=1478 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:30:55.804000 audit[1478]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff5867160 a2=0 a3=1 items=0 ppid=1440 pid=1478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.804000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:30:55.818385 kubelet[1440]: I1002 19:30:55.818359 
1440 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.13" Oct 2 19:30:55.820176 kubelet[1440]: E1002 19:30:55.820154 1440 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.13" Oct 2 19:30:55.820277 kubelet[1440]: E1002 19:30:55.820101 1440 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a612d0ab00c3f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.13 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 739145279, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 818313822, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a612d0ab00c3f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:30:55.821604 kubelet[1440]: E1002 19:30:55.821542 1440 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a612d0ab03ca9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.13 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 739157673, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 818326837, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a612d0ab03ca9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:30:55.822653 kubelet[1440]: E1002 19:30:55.822593 1440 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a612d0ab04cbf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.13 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 739161791, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 818329868, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a612d0ab04cbf" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:30:55.923546 kubelet[1440]: E1002 19:30:55.923513 1440 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.13\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Oct 2 19:30:56.021457 kubelet[1440]: I1002 19:30:56.021366 1440 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.13" Oct 2 19:30:56.024821 kubelet[1440]: E1002 19:30:56.024738 1440 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a612d0ab00c3f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.13 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 739145279, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 56, 21331728, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a612d0ab00c3f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:30:56.025204 kubelet[1440]: E1002 19:30:56.025157 1440 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.13" Oct 2 19:30:56.027721 kubelet[1440]: E1002 19:30:56.027646 1440 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a612d0ab03ca9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.13 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 739157673, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 56, 21336915, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a612d0ab03ca9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:30:56.028657 kubelet[1440]: E1002 19:30:56.028592 1440 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a612d0ab04cbf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.13 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 739161791, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 56, 21340073, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a612d0ab04cbf" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:30:56.325028 kubelet[1440]: E1002 19:30:56.324919 1440 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.13\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Oct 2 19:30:56.426966 kubelet[1440]: I1002 19:30:56.426925 1440 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.13" Oct 2 19:30:56.428735 kubelet[1440]: E1002 19:30:56.428704 1440 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.13" Oct 2 19:30:56.428829 kubelet[1440]: E1002 19:30:56.428699 1440 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a612d0ab00c3f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.13 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 739145279, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 56, 426889363, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a612d0ab00c3f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:30:56.429877 kubelet[1440]: E1002 19:30:56.429809 1440 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a612d0ab03ca9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.13 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 739157673, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 56, 426894823, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a612d0ab03ca9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:30:56.430944 kubelet[1440]: E1002 19:30:56.430871 1440 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a612d0ab04cbf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.13 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 739161791, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 56, 426897474, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a612d0ab04cbf" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:30:56.690393 kubelet[1440]: W1002 19:30:56.690287 1440 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:30:56.690393 kubelet[1440]: E1002 19:30:56.690323 1440 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:30:56.698357 kubelet[1440]: I1002 19:30:56.698312 1440 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 19:30:56.708686 kubelet[1440]: E1002 19:30:56.708639 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:57.087492 kubelet[1440]: E1002 19:30:57.087378 1440 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.13" not found Oct 2 19:30:57.129884 kubelet[1440]: E1002 19:30:57.129829 1440 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.13\" not found" node="10.0.0.13" Oct 2 19:30:57.230351 kubelet[1440]: I1002 19:30:57.230318 1440 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.13" Oct 2 19:30:57.234929 kubelet[1440]: I1002 19:30:57.234898 1440 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.13" Oct 2 19:30:57.243770 kubelet[1440]: I1002 19:30:57.243054 1440 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 19:30:57.243770 kubelet[1440]: I1002 19:30:57.243540 1440 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 19:30:57.243927 env[1141]: 
time="2023-10-02T19:30:57.243356459Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 2 19:30:57.708213 sudo[1268]: pam_unix(sudo:session): session closed for user root Oct 2 19:30:57.708674 kubelet[1440]: I1002 19:30:57.708652 1440 apiserver.go:52] "Watching apiserver" Oct 2 19:30:57.707000 audit[1268]: USER_END pid=1268 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:30:57.709155 kubelet[1440]: E1002 19:30:57.708768 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:57.709387 kernel: kauditd_printk_skb: 411 callbacks suppressed Oct 2 19:30:57.709420 kernel: audit: type=1106 audit(1696275057.707:546): pid=1268 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:30:57.710704 sshd[1265]: pam_unix(sshd:session): session closed for user core Oct 2 19:30:57.707000 audit[1268]: CRED_DISP pid=1268 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:30:57.713078 kubelet[1440]: I1002 19:30:57.712963 1440 topology_manager.go:212] "Topology Admit Handler" Oct 2 19:30:57.713310 kubelet[1440]: I1002 19:30:57.713293 1440 topology_manager.go:212] "Topology Admit Handler" Oct 2 19:30:57.715645 kernel: audit: type=1104 audit(1696275057.707:547): pid=1268 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:30:57.715000 audit[1265]: USER_END pid=1265 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:30:57.718463 kubelet[1440]: I1002 19:30:57.718439 1440 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Oct 2 19:30:57.718532 systemd[1]: sshd@6-10.0.0.13:22-10.0.0.1:41698.service: Deactivated successfully. Oct 2 19:30:57.719314 kernel: audit: type=1106 audit(1696275057.715:548): pid=1265 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:30:57.719385 kernel: audit: type=1104 audit(1696275057.715:549): pid=1265 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:30:57.715000 audit[1265]: CRED_DISP pid=1265 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:30:57.719568 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 19:30:57.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.13:22-10.0.0.1:41698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:57.721811 systemd[1]: Created slice kubepods-besteffort-pod3e93d5fb_2c96_4310_957d_07dcb9374c18.slice. 
Oct 2 19:30:57.724080 kernel: audit: type=1131 audit(1696275057.715:550): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.13:22-10.0.0.1:41698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:57.724648 systemd-logind[1129]: Session 7 logged out. Waiting for processes to exit. Oct 2 19:30:57.726033 kubelet[1440]: I1002 19:30:57.725998 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e93d5fb-2c96-4310-957d-07dcb9374c18-xtables-lock\") pod \"kube-proxy-4p4mq\" (UID: \"3e93d5fb-2c96-4310-957d-07dcb9374c18\") " pod="kube-system/kube-proxy-4p4mq" Oct 2 19:30:57.726108 kubelet[1440]: I1002 19:30:57.726043 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-lib-modules\") pod \"cilium-nmrtg\" (UID: \"e85c9b85-6cb8-4246-9860-75a733298aad\") " pod="kube-system/cilium-nmrtg" Oct 2 19:30:57.726108 kubelet[1440]: I1002 19:30:57.726068 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e85c9b85-6cb8-4246-9860-75a733298aad-clustermesh-secrets\") pod \"cilium-nmrtg\" (UID: \"e85c9b85-6cb8-4246-9860-75a733298aad\") " pod="kube-system/cilium-nmrtg" Oct 2 19:30:57.726108 kubelet[1440]: I1002 19:30:57.726092 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-host-proc-sys-kernel\") pod \"cilium-nmrtg\" (UID: \"e85c9b85-6cb8-4246-9860-75a733298aad\") " pod="kube-system/cilium-nmrtg" Oct 2 19:30:57.726178 kubelet[1440]: I1002 19:30:57.726120 1440 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrprl\" (UniqueName: \"kubernetes.io/projected/e85c9b85-6cb8-4246-9860-75a733298aad-kube-api-access-zrprl\") pod \"cilium-nmrtg\" (UID: \"e85c9b85-6cb8-4246-9860-75a733298aad\") " pod="kube-system/cilium-nmrtg" Oct 2 19:30:57.726178 kubelet[1440]: I1002 19:30:57.726143 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-cilium-cgroup\") pod \"cilium-nmrtg\" (UID: \"e85c9b85-6cb8-4246-9860-75a733298aad\") " pod="kube-system/cilium-nmrtg" Oct 2 19:30:57.726178 kubelet[1440]: I1002 19:30:57.726161 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-etc-cni-netd\") pod \"cilium-nmrtg\" (UID: \"e85c9b85-6cb8-4246-9860-75a733298aad\") " pod="kube-system/cilium-nmrtg" Oct 2 19:30:57.726178 kubelet[1440]: I1002 19:30:57.726179 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e93d5fb-2c96-4310-957d-07dcb9374c18-lib-modules\") pod \"kube-proxy-4p4mq\" (UID: \"3e93d5fb-2c96-4310-957d-07dcb9374c18\") " pod="kube-system/kube-proxy-4p4mq" Oct 2 19:30:57.726269 kubelet[1440]: I1002 19:30:57.726205 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e85c9b85-6cb8-4246-9860-75a733298aad-cilium-config-path\") pod \"cilium-nmrtg\" (UID: \"e85c9b85-6cb8-4246-9860-75a733298aad\") " pod="kube-system/cilium-nmrtg" Oct 2 19:30:57.726269 kubelet[1440]: I1002 19:30:57.726228 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-bpf-maps\") pod \"cilium-nmrtg\" (UID: \"e85c9b85-6cb8-4246-9860-75a733298aad\") " pod="kube-system/cilium-nmrtg" Oct 2 19:30:57.726269 kubelet[1440]: I1002 19:30:57.726249 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-hostproc\") pod \"cilium-nmrtg\" (UID: \"e85c9b85-6cb8-4246-9860-75a733298aad\") " pod="kube-system/cilium-nmrtg" Oct 2 19:30:57.726269 kubelet[1440]: I1002 19:30:57.726267 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-cni-path\") pod \"cilium-nmrtg\" (UID: \"e85c9b85-6cb8-4246-9860-75a733298aad\") " pod="kube-system/cilium-nmrtg" Oct 2 19:30:57.726348 kubelet[1440]: I1002 19:30:57.726288 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-xtables-lock\") pod \"cilium-nmrtg\" (UID: \"e85c9b85-6cb8-4246-9860-75a733298aad\") " pod="kube-system/cilium-nmrtg" Oct 2 19:30:57.726348 kubelet[1440]: I1002 19:30:57.726307 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9q26\" (UniqueName: \"kubernetes.io/projected/3e93d5fb-2c96-4310-957d-07dcb9374c18-kube-api-access-w9q26\") pod \"kube-proxy-4p4mq\" (UID: \"3e93d5fb-2c96-4310-957d-07dcb9374c18\") " pod="kube-system/kube-proxy-4p4mq" Oct 2 19:30:57.726348 kubelet[1440]: I1002 19:30:57.726329 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-cilium-run\") pod \"cilium-nmrtg\" (UID: \"e85c9b85-6cb8-4246-9860-75a733298aad\") 
" pod="kube-system/cilium-nmrtg" Oct 2 19:30:57.726348 kubelet[1440]: I1002 19:30:57.726347 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-host-proc-sys-net\") pod \"cilium-nmrtg\" (UID: \"e85c9b85-6cb8-4246-9860-75a733298aad\") " pod="kube-system/cilium-nmrtg" Oct 2 19:30:57.726432 kubelet[1440]: I1002 19:30:57.726377 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e85c9b85-6cb8-4246-9860-75a733298aad-hubble-tls\") pod \"cilium-nmrtg\" (UID: \"e85c9b85-6cb8-4246-9860-75a733298aad\") " pod="kube-system/cilium-nmrtg" Oct 2 19:30:57.726432 kubelet[1440]: I1002 19:30:57.726416 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3e93d5fb-2c96-4310-957d-07dcb9374c18-kube-proxy\") pod \"kube-proxy-4p4mq\" (UID: \"3e93d5fb-2c96-4310-957d-07dcb9374c18\") " pod="kube-system/kube-proxy-4p4mq" Oct 2 19:30:57.726432 kubelet[1440]: I1002 19:30:57.726423 1440 reconciler.go:41] "Reconciler: start to sync state" Oct 2 19:30:57.737139 systemd[1]: Created slice kubepods-burstable-pode85c9b85_6cb8_4246_9860_75a733298aad.slice. Oct 2 19:30:57.737512 systemd-logind[1129]: Removed session 7. 
Oct 2 19:30:58.039845 kubelet[1440]: E1002 19:30:58.038923 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:30:58.042048 env[1141]: time="2023-10-02T19:30:58.041771512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4p4mq,Uid:3e93d5fb-2c96-4310-957d-07dcb9374c18,Namespace:kube-system,Attempt:0,}" Oct 2 19:30:58.061980 kubelet[1440]: E1002 19:30:58.061943 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:30:58.062792 env[1141]: time="2023-10-02T19:30:58.062736534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nmrtg,Uid:e85c9b85-6cb8-4246-9860-75a733298aad,Namespace:kube-system,Attempt:0,}" Oct 2 19:30:58.710253 kubelet[1440]: E1002 19:30:58.710208 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:58.830998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2938504186.mount: Deactivated successfully. 
Oct 2 19:30:58.844270 env[1141]: time="2023-10-02T19:30:58.844170441Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:30:58.846472 env[1141]: time="2023-10-02T19:30:58.846438352Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:30:58.847377 env[1141]: time="2023-10-02T19:30:58.847344120Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:30:58.849422 env[1141]: time="2023-10-02T19:30:58.849395828Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:30:58.851107 env[1141]: time="2023-10-02T19:30:58.851071506Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:30:58.859825 env[1141]: time="2023-10-02T19:30:58.859792423Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:30:58.862533 env[1141]: time="2023-10-02T19:30:58.862501409Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:30:58.864300 env[1141]: time="2023-10-02T19:30:58.864239817Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:30:58.908649 env[1141]: time="2023-10-02T19:30:58.908565862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:30:58.908649 env[1141]: time="2023-10-02T19:30:58.908612782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:30:58.908795 env[1141]: time="2023-10-02T19:30:58.908623374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:30:58.908864 env[1141]: time="2023-10-02T19:30:58.908826709Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/efdf5553666931276c842a9dcc40ef0f9653247faf01f83e3707bbc0c139491b pid=1501 runtime=io.containerd.runc.v2 Oct 2 19:30:58.909009 env[1141]: time="2023-10-02T19:30:58.908954367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:30:58.909145 env[1141]: time="2023-10-02T19:30:58.909109055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:30:58.909244 env[1141]: time="2023-10-02T19:30:58.909216039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:30:58.909500 env[1141]: time="2023-10-02T19:30:58.909457741Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d087a01152f85056fd5c026bad5750a5e4677d5904379231ad8cb8efb6d43fe pid=1502 runtime=io.containerd.runc.v2 Oct 2 19:30:58.933825 systemd[1]: Started cri-containerd-efdf5553666931276c842a9dcc40ef0f9653247faf01f83e3707bbc0c139491b.scope. Oct 2 19:30:58.937698 systemd[1]: Started cri-containerd-0d087a01152f85056fd5c026bad5750a5e4677d5904379231ad8cb8efb6d43fe.scope. Oct 2 19:30:58.962000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.962000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.966838 kernel: audit: type=1400 audit(1696275058.962:551): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.966889 kernel: audit: type=1400 audit(1696275058.962:552): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.962000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.969407 kernel: audit: type=1400 audit(1696275058.962:553): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.969469 kernel: audit: type=1400 
audit(1696275058.962:554): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.962000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.971087 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:30:58.962000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.962000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.962000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.962000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.962000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.965000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.965000 audit: BPF prog-id=64 op=LOAD Oct 2 19:30:58.965000 audit[1519]: AVC avc: denied { bpf } for pid=1519 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:30:58.965000 audit[1519]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000195b38 a2=10 a3=0 items=0 ppid=1501 pid=1519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:58.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566646635353533363636393331323736633834326139646363343065 Oct 2 19:30:58.965000 audit[1519]: AVC avc: denied { perfmon } for pid=1519 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.965000 audit[1519]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=1501 pid=1519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:58.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566646635353533363636393331323736633834326139646363343065 Oct 2 19:30:58.965000 audit[1519]: AVC avc: denied { bpf } for pid=1519 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.965000 audit[1519]: AVC avc: denied { bpf } for pid=1519 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.965000 audit[1519]: AVC avc: denied { bpf } for pid=1519 
comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.965000 audit[1519]: AVC avc: denied { perfmon } for pid=1519 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.965000 audit[1519]: AVC avc: denied { perfmon } for pid=1519 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.965000 audit[1519]: AVC avc: denied { perfmon } for pid=1519 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.965000 audit[1519]: AVC avc: denied { perfmon } for pid=1519 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.965000 audit[1519]: AVC avc: denied { perfmon } for pid=1519 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.965000 audit[1519]: AVC avc: denied { bpf } for pid=1519 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.965000 audit[1519]: AVC avc: denied { bpf } for pid=1519 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.965000 audit: BPF prog-id=65 op=LOAD Oct 2 19:30:58.965000 audit[1519]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=1501 pid=1519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Oct 2 19:30:58.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566646635353533363636393331323736633834326139646363343065 Oct 2 19:30:58.966000 audit[1519]: AVC avc: denied { bpf } for pid=1519 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.966000 audit[1519]: AVC avc: denied { bpf } for pid=1519 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.966000 audit[1519]: AVC avc: denied { perfmon } for pid=1519 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.966000 audit[1519]: AVC avc: denied { perfmon } for pid=1519 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.966000 audit[1519]: AVC avc: denied { perfmon } for pid=1519 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.966000 audit[1519]: AVC avc: denied { perfmon } for pid=1519 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.966000 audit[1519]: AVC avc: denied { perfmon } for pid=1519 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.966000 audit[1519]: AVC avc: denied { bpf } for pid=1519 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:30:58.966000 audit[1519]: AVC avc: denied { bpf } for pid=1519 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.966000 audit: BPF prog-id=66 op=LOAD Oct 2 19:30:58.966000 audit[1519]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=1501 pid=1519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:58.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566646635353533363636393331323736633834326139646363343065 Oct 2 19:30:58.966000 audit: BPF prog-id=66 op=UNLOAD Oct 2 19:30:58.966000 audit: BPF prog-id=65 op=UNLOAD Oct 2 19:30:58.968000 audit[1519]: AVC avc: denied { bpf } for pid=1519 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.968000 audit[1519]: AVC avc: denied { bpf } for pid=1519 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.968000 audit[1519]: AVC avc: denied { bpf } for pid=1519 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.968000 audit[1519]: AVC avc: denied { perfmon } for pid=1519 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.968000 audit[1519]: AVC avc: denied { perfmon } for pid=1519 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.968000 audit[1519]: AVC avc: denied { perfmon } for pid=1519 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.968000 audit[1519]: AVC avc: denied { perfmon } for pid=1519 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.968000 audit[1519]: AVC avc: denied { perfmon } for pid=1519 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.968000 audit[1519]: AVC avc: denied { bpf } for pid=1519 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.968000 audit[1519]: AVC avc: denied { bpf } for pid=1519 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.968000 audit: BPF prog-id=67 op=LOAD Oct 2 19:30:58.968000 audit[1519]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=1501 pid=1519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:58.968000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566646635353533363636393331323736633834326139646363343065 Oct 2 19:30:58.969000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.969000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.969000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.969000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.969000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.969000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.969000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.970000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.970000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.970000 audit: BPF prog-id=68 op=LOAD Oct 2 19:30:58.971000 audit[1522]: AVC avc: denied { bpf } for pid=1522 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:30:58.971000 audit[1522]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000195b38 a2=10 a3=0 items=0 ppid=1502 pid=1522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:58.971000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064303837613031313532663835303536666435633032366261643537 Oct 2 19:30:58.971000 audit[1522]: AVC avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.971000 audit[1522]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=1502 pid=1522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:58.971000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064303837613031313532663835303536666435633032366261643537 Oct 2 19:30:58.971000 audit[1522]: AVC avc: denied { bpf } for pid=1522 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.971000 audit[1522]: AVC avc: denied { bpf } for pid=1522 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.971000 audit[1522]: AVC avc: denied { bpf } for pid=1522 
comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.971000 audit[1522]: AVC avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.971000 audit[1522]: AVC avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.971000 audit[1522]: AVC avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.971000 audit[1522]: AVC avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.971000 audit[1522]: AVC avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.971000 audit[1522]: AVC avc: denied { bpf } for pid=1522 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.971000 audit[1522]: AVC avc: denied { bpf } for pid=1522 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.971000 audit: BPF prog-id=69 op=LOAD Oct 2 19:30:58.971000 audit[1522]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=1502 pid=1522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Oct 2 19:30:58.971000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064303837613031313532663835303536666435633032366261643537 Oct 2 19:30:58.971000 audit[1522]: AVC avc: denied { bpf } for pid=1522 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.971000 audit[1522]: AVC avc: denied { bpf } for pid=1522 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.971000 audit[1522]: AVC avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.971000 audit[1522]: AVC avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.971000 audit[1522]: AVC avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.971000 audit[1522]: AVC avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.971000 audit[1522]: AVC avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.971000 audit[1522]: AVC avc: denied { bpf } for pid=1522 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:30:58.971000 audit[1522]: AVC avc: denied { bpf } for pid=1522 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.971000 audit: BPF prog-id=70 op=LOAD Oct 2 19:30:58.971000 audit[1522]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=1502 pid=1522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:58.971000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064303837613031313532663835303536666435633032366261643537 Oct 2 19:30:58.971000 audit: BPF prog-id=70 op=UNLOAD Oct 2 19:30:58.971000 audit: BPF prog-id=69 op=UNLOAD Oct 2 19:30:58.971000 audit[1522]: AVC avc: denied { bpf } for pid=1522 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.971000 audit[1522]: AVC avc: denied { bpf } for pid=1522 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.971000 audit[1522]: AVC avc: denied { bpf } for pid=1522 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.971000 audit[1522]: AVC avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.971000 audit[1522]: AVC avc: denied { perfmon } for pid=1522 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.971000 audit[1522]: AVC avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.971000 audit[1522]: AVC avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.971000 audit[1522]: AVC avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.971000 audit[1522]: AVC avc: denied { bpf } for pid=1522 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.971000 audit[1522]: AVC avc: denied { bpf } for pid=1522 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:58.971000 audit: BPF prog-id=71 op=LOAD Oct 2 19:30:58.971000 audit[1522]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=1502 pid=1522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:58.971000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064303837613031313532663835303536666435633032366261643537 Oct 2 19:30:58.986553 env[1141]: time="2023-10-02T19:30:58.986512763Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-4p4mq,Uid:3e93d5fb-2c96-4310-957d-07dcb9374c18,Namespace:kube-system,Attempt:0,} returns sandbox id \"efdf5553666931276c842a9dcc40ef0f9653247faf01f83e3707bbc0c139491b\"" Oct 2 19:30:58.987851 kubelet[1440]: E1002 19:30:58.987805 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:30:58.988844 env[1141]: time="2023-10-02T19:30:58.988801937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nmrtg,Uid:e85c9b85-6cb8-4246-9860-75a733298aad,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d087a01152f85056fd5c026bad5750a5e4677d5904379231ad8cb8efb6d43fe\"" Oct 2 19:30:58.988963 env[1141]: time="2023-10-02T19:30:58.988928889Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.6\"" Oct 2 19:30:58.989757 kubelet[1440]: E1002 19:30:58.989673 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:30:59.710933 kubelet[1440]: E1002 19:30:59.710873 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:00.711839 kubelet[1440]: E1002 19:31:00.711782 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:00.851050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1181428419.mount: Deactivated successfully. 
Oct 2 19:31:01.532467 env[1141]: time="2023-10-02T19:31:01.532370635Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:01.533634 env[1141]: time="2023-10-02T19:31:01.533596326Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95195e68173b6cfcdd3125d7bbffa6759189df53b60ffe7a72256059cd5dd7af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:01.534217 env[1141]: time="2023-10-02T19:31:01.534191600Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:01.536034 env[1141]: time="2023-10-02T19:31:01.535993019Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:8e9eff2f6d0b398f9ac5f5a15c1cb7d5f468f28d64a78d593d57f72a969a54ef,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:01.536572 env[1141]: time="2023-10-02T19:31:01.536545018Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.6\" returns image reference \"sha256:95195e68173b6cfcdd3125d7bbffa6759189df53b60ffe7a72256059cd5dd7af\"" Oct 2 19:31:01.538045 env[1141]: time="2023-10-02T19:31:01.538010856Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 2 19:31:01.543906 env[1141]: time="2023-10-02T19:31:01.543861451Z" level=info msg="CreateContainer within sandbox \"efdf5553666931276c842a9dcc40ef0f9653247faf01f83e3707bbc0c139491b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:31:01.556260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount146308703.mount: Deactivated successfully. 
Oct 2 19:31:01.561462 env[1141]: time="2023-10-02T19:31:01.561413081Z" level=info msg="CreateContainer within sandbox \"efdf5553666931276c842a9dcc40ef0f9653247faf01f83e3707bbc0c139491b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"222897e9b5237eeec3a043afdc1e1f791a0c9eab1ac52855ac9ed7be4f40207c\"" Oct 2 19:31:01.562499 env[1141]: time="2023-10-02T19:31:01.562466064Z" level=info msg="StartContainer for \"222897e9b5237eeec3a043afdc1e1f791a0c9eab1ac52855ac9ed7be4f40207c\"" Oct 2 19:31:01.582188 systemd[1]: Started cri-containerd-222897e9b5237eeec3a043afdc1e1f791a0c9eab1ac52855ac9ed7be4f40207c.scope. Oct 2 19:31:01.608000 audit[1578]: AVC avc: denied { perfmon } for pid=1578 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:01.608000 audit[1578]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=1501 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.608000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232323839376539623532333765656563336130343361666463316531 Oct 2 19:31:01.608000 audit[1578]: AVC avc: denied { bpf } for pid=1578 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:01.608000 audit[1578]: AVC avc: denied { bpf } for pid=1578 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:01.608000 audit[1578]: AVC avc: denied { bpf } for pid=1578 comm="runc" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:01.608000 audit[1578]: AVC avc: denied { perfmon } for pid=1578 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:01.608000 audit[1578]: AVC avc: denied { perfmon } for pid=1578 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:01.608000 audit[1578]: AVC avc: denied { perfmon } for pid=1578 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:01.608000 audit[1578]: AVC avc: denied { perfmon } for pid=1578 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:01.608000 audit[1578]: AVC avc: denied { perfmon } for pid=1578 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:01.608000 audit[1578]: AVC avc: denied { bpf } for pid=1578 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:01.608000 audit[1578]: AVC avc: denied { bpf } for pid=1578 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:01.608000 audit: BPF prog-id=72 op=LOAD Oct 2 19:31:01.608000 audit[1578]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=1501 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 
19:31:01.608000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232323839376539623532333765656563336130343361666463316531 Oct 2 19:31:01.608000 audit[1578]: AVC avc: denied { bpf } for pid=1578 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:01.608000 audit[1578]: AVC avc: denied { bpf } for pid=1578 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:01.608000 audit[1578]: AVC avc: denied { perfmon } for pid=1578 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:01.608000 audit[1578]: AVC avc: denied { perfmon } for pid=1578 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:01.608000 audit[1578]: AVC avc: denied { perfmon } for pid=1578 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:01.608000 audit[1578]: AVC avc: denied { perfmon } for pid=1578 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:01.608000 audit[1578]: AVC avc: denied { perfmon } for pid=1578 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:01.608000 audit[1578]: AVC avc: denied { bpf } for pid=1578 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:31:01.608000 audit[1578]: AVC avc: denied { bpf } for pid=1578 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:01.608000 audit: BPF prog-id=73 op=LOAD Oct 2 19:31:01.608000 audit[1578]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=1501 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.608000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232323839376539623532333765656563336130343361666463316531 Oct 2 19:31:01.608000 audit: BPF prog-id=73 op=UNLOAD Oct 2 19:31:01.608000 audit: BPF prog-id=72 op=UNLOAD Oct 2 19:31:01.608000 audit[1578]: AVC avc: denied { bpf } for pid=1578 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:01.608000 audit[1578]: AVC avc: denied { bpf } for pid=1578 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:01.608000 audit[1578]: AVC avc: denied { bpf } for pid=1578 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:01.608000 audit[1578]: AVC avc: denied { perfmon } for pid=1578 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:01.608000 audit[1578]: AVC avc: denied { perfmon } for pid=1578 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:01.608000 audit[1578]: AVC avc: denied { perfmon } for pid=1578 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:01.608000 audit[1578]: AVC avc: denied { perfmon } for pid=1578 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:01.608000 audit[1578]: AVC avc: denied { perfmon } for pid=1578 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:01.608000 audit[1578]: AVC avc: denied { bpf } for pid=1578 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:01.608000 audit[1578]: AVC avc: denied { bpf } for pid=1578 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:01.608000 audit: BPF prog-id=74 op=LOAD Oct 2 19:31:01.608000 audit[1578]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=1501 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.608000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232323839376539623532333765656563336130343361666463316531 Oct 2 19:31:01.625476 env[1141]: time="2023-10-02T19:31:01.625430183Z" level=info msg="StartContainer for \"222897e9b5237eeec3a043afdc1e1f791a0c9eab1ac52855ac9ed7be4f40207c\" 
returns successfully" Oct 2 19:31:01.713113 kubelet[1440]: E1002 19:31:01.713042 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:01.747000 audit[1630]: NETFILTER_CFG table=mangle:14 family=10 entries=1 op=nft_register_chain pid=1630 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:01.747000 audit[1630]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffda337290 a2=0 a3=ffff9a3926c0 items=0 ppid=1589 pid=1630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.747000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:31:01.748000 audit[1629]: NETFILTER_CFG table=mangle:15 family=2 entries=1 op=nft_register_chain pid=1629 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:01.748000 audit[1629]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff2746090 a2=0 a3=ffffae8556c0 items=0 ppid=1589 pid=1629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.748000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:31:01.748000 audit[1631]: NETFILTER_CFG table=nat:16 family=10 entries=1 op=nft_register_chain pid=1631 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:01.748000 audit[1631]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe331a810 a2=0 a3=ffff8bee26c0 items=0 ppid=1589 pid=1631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.748000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:31:01.750000 audit[1633]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1633 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:01.750000 audit[1633]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcc87d430 a2=0 a3=ffffb3cb36c0 items=0 ppid=1589 pid=1633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.750000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:31:01.750000 audit[1632]: NETFILTER_CFG table=nat:18 family=2 entries=1 op=nft_register_chain pid=1632 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:01.750000 audit[1632]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffcb8a2f0 a2=0 a3=ffff982d86c0 items=0 ppid=1589 pid=1632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.750000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:31:01.752000 audit[1634]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_chain pid=1634 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:01.752000 audit[1634]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff2f76ee0 a2=0 a3=ffff88bcd6c0 items=0 ppid=1589 pid=1634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.752000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:31:01.812391 kubelet[1440]: E1002 19:31:01.812013 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:31:01.850000 audit[1635]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_chain pid=1635 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:01.850000 audit[1635]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd7586cb0 a2=0 a3=ffffa25d76c0 items=0 ppid=1589 pid=1635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.850000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:31:01.852000 audit[1637]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1637 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:01.852000 audit[1637]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffffaed0fa0 a2=0 a3=ffffb06a26c0 items=0 ppid=1589 pid=1637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.852000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:31:01.856000 audit[1640]: NETFILTER_CFG table=filter:22 family=2 entries=2 op=nft_register_chain pid=1640 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:01.856000 audit[1640]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffffb82f820 a2=0 a3=ffffb391c6c0 items=0 ppid=1589 pid=1640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.856000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:31:01.858000 audit[1641]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_chain pid=1641 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:01.858000 audit[1641]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc1cad520 a2=0 a3=ffffb45286c0 items=0 ppid=1589 pid=1641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.858000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:31:01.862000 audit[1643]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=1643 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:01.862000 audit[1643]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=528 a0=3 a1=ffffe837f960 a2=0 a3=ffffa10346c0 items=0 ppid=1589 pid=1643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.862000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:31:01.863000 audit[1644]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_chain pid=1644 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:01.863000 audit[1644]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcc41da20 a2=0 a3=ffffb0ec86c0 items=0 ppid=1589 pid=1644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.863000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:31:01.865000 audit[1646]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=1646 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:01.865000 audit[1646]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd486b9f0 a2=0 a3=ffff997f86c0 items=0 ppid=1589 pid=1646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.865000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:31:01.870000 audit[1649]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_rule pid=1649 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:01.870000 audit[1649]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd3b1b320 a2=0 a3=ffff8402c6c0 items=0 ppid=1589 pid=1649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.870000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:31:01.872000 audit[1650]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1650 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:01.872000 audit[1650]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc6678840 a2=0 a3=ffff7fc6b6c0 items=0 ppid=1589 pid=1650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.872000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:31:01.874000 audit[1652]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1652 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:01.874000 audit[1652]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=528 a0=3 a1=fffff8f22e90 a2=0 a3=ffffa90896c0 items=0 ppid=1589 pid=1652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.874000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:31:01.876000 audit[1653]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_chain pid=1653 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:01.876000 audit[1653]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff9b45dc0 a2=0 a3=ffff853a66c0 items=0 ppid=1589 pid=1653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.876000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:31:01.878000 audit[1655]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_rule pid=1655 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:01.878000 audit[1655]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc15f3cb0 a2=0 a3=ffffbe34a6c0 items=0 ppid=1589 pid=1655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.878000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:31:01.882000 audit[1658]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=1658 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:01.882000 audit[1658]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdeea7490 a2=0 a3=ffff8a7586c0 items=0 ppid=1589 pid=1658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.882000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:31:01.886000 audit[1661]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=1661 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:01.886000 audit[1661]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffeff69fb0 a2=0 a3=ffffa2d146c0 items=0 ppid=1589 pid=1661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.886000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:31:01.888000 audit[1662]: NETFILTER_CFG table=nat:34 family=2 entries=1 
op=nft_register_chain pid=1662 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:01.888000 audit[1662]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdf18b8c0 a2=0 a3=ffff97a0e6c0 items=0 ppid=1589 pid=1662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.888000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:31:01.890000 audit[1664]: NETFILTER_CFG table=nat:35 family=2 entries=2 op=nft_register_chain pid=1664 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:01.890000 audit[1664]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffc1b12460 a2=0 a3=ffff8b0746c0 items=0 ppid=1589 pid=1664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.890000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:31:01.912000 audit[1669]: NETFILTER_CFG table=nat:36 family=2 entries=2 op=nft_register_chain pid=1669 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:01.912000 audit[1669]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffdfc4fe30 a2=0 a3=ffffb17d86c0 items=0 ppid=1589 pid=1669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.912000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:31:01.918000 audit[1674]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=1674 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:01.918000 audit[1674]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffe8f7530 a2=0 a3=ffffbe9416c0 items=0 ppid=1589 pid=1674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.918000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:31:01.921000 audit[1676]: NETFILTER_CFG table=nat:38 family=2 entries=2 op=nft_register_chain pid=1676 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:01.921000 audit[1676]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffdc119e20 a2=0 a3=ffff8a0a16c0 items=0 ppid=1589 pid=1676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.921000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:31:01.932000 audit[1678]: NETFILTER_CFG table=filter:39 family=2 entries=8 op=nft_register_rule pid=1678 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:31:01.932000 audit[1678]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4956 a0=3 a1=fffffc936590 a2=0 a3=ffffa3dd66c0 
items=0 ppid=1589 pid=1678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.932000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:31:01.944000 audit[1678]: NETFILTER_CFG table=nat:40 family=2 entries=14 op=nft_register_chain pid=1678 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:31:01.944000 audit[1678]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=fffffc936590 a2=0 a3=ffffa3dd66c0 items=0 ppid=1589 pid=1678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.944000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:31:01.945000 audit[1684]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=1684 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:01.945000 audit[1684]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd69e3d50 a2=0 a3=ffff9864d6c0 items=0 ppid=1589 pid=1684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.945000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:31:01.948000 audit[1686]: NETFILTER_CFG table=filter:42 family=10 entries=2 op=nft_register_chain pid=1686 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:01.948000 audit[1686]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=836 a0=3 a1=ffffe8c21c20 a2=0 a3=ffffa5ca06c0 items=0 ppid=1589 pid=1686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.948000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:31:01.953000 audit[1689]: NETFILTER_CFG table=filter:43 family=10 entries=2 op=nft_register_chain pid=1689 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:01.953000 audit[1689]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffde838670 a2=0 a3=ffff9303d6c0 items=0 ppid=1589 pid=1689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.953000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 19:31:01.954000 audit[1690]: NETFILTER_CFG table=filter:44 family=10 entries=1 op=nft_register_chain pid=1690 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:01.954000 audit[1690]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff44a7bb0 a2=0 a3=ffff98e936c0 items=0 ppid=1589 pid=1690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.954000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:31:01.957000 audit[1692]: NETFILTER_CFG table=filter:45 family=10 entries=1 op=nft_register_rule pid=1692 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:01.957000 audit[1692]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcecbbfa0 a2=0 a3=ffffb779b6c0 items=0 ppid=1589 pid=1692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.957000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:31:01.958000 audit[1693]: NETFILTER_CFG table=filter:46 family=10 entries=1 op=nft_register_chain pid=1693 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:01.958000 audit[1693]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff5a109f0 a2=0 a3=ffff9191c6c0 items=0 ppid=1589 pid=1693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.958000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:31:01.961000 audit[1695]: NETFILTER_CFG table=filter:47 family=10 entries=1 op=nft_register_rule pid=1695 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:01.961000 audit[1695]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe5cdf1f0 a2=0 a3=ffffa6c556c0 items=0 ppid=1589 pid=1695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.961000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 19:31:01.967000 audit[1698]: NETFILTER_CFG table=filter:48 family=10 entries=2 op=nft_register_chain pid=1698 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:01.967000 audit[1698]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffc98220f0 a2=0 a3=ffff81f196c0 items=0 ppid=1589 pid=1698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.967000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:31:01.969000 audit[1699]: NETFILTER_CFG table=filter:49 family=10 entries=1 op=nft_register_chain pid=1699 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:01.969000 audit[1699]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffce4b3fa0 a2=0 a3=ffffb61cf6c0 items=0 ppid=1589 pid=1699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.969000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:31:01.972000 audit[1701]: NETFILTER_CFG table=filter:50 
family=10 entries=1 op=nft_register_rule pid=1701 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:01.972000 audit[1701]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdb380610 a2=0 a3=ffff819246c0 items=0 ppid=1589 pid=1701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.972000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:31:01.974000 audit[1702]: NETFILTER_CFG table=filter:51 family=10 entries=1 op=nft_register_chain pid=1702 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:01.974000 audit[1702]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffca06fec0 a2=0 a3=ffff878fb6c0 items=0 ppid=1589 pid=1702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.974000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:31:01.977000 audit[1704]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_rule pid=1704 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:01.977000 audit[1704]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffea32ec00 a2=0 a3=ffff8ec8d6c0 items=0 ppid=1589 pid=1704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.977000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:31:01.981000 audit[1707]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_rule pid=1707 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:01.981000 audit[1707]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe669e600 a2=0 a3=ffff809ff6c0 items=0 ppid=1589 pid=1707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.981000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:31:01.985000 audit[1710]: NETFILTER_CFG table=filter:54 family=10 entries=1 op=nft_register_rule pid=1710 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:01.985000 audit[1710]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd5184af0 a2=0 a3=ffff7fa7b6c0 items=0 ppid=1589 pid=1710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.985000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:31:01.986000 audit[1711]: NETFILTER_CFG table=nat:55 family=10 
entries=1 op=nft_register_chain pid=1711 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:01.986000 audit[1711]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffeb9e8cb0 a2=0 a3=ffff944ec6c0 items=0 ppid=1589 pid=1711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.986000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:31:01.988000 audit[1713]: NETFILTER_CFG table=nat:56 family=10 entries=2 op=nft_register_chain pid=1713 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:01.988000 audit[1713]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffc89b12b0 a2=0 a3=ffff907c26c0 items=0 ppid=1589 pid=1713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.988000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:31:01.992000 audit[1716]: NETFILTER_CFG table=nat:57 family=10 entries=2 op=nft_register_chain pid=1716 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:01.992000 audit[1716]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=fffff87aafe0 a2=0 a3=ffff82da46c0 items=0 ppid=1589 pid=1716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.992000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:31:01.993000 audit[1717]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=1717 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:01.993000 audit[1717]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcd12c1b0 a2=0 a3=ffffa0e5e6c0 items=0 ppid=1589 pid=1717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.993000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:31:01.996000 audit[1719]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_rule pid=1719 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:01.996000 audit[1719]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc5a8d930 a2=0 a3=ffff8088c6c0 items=0 ppid=1589 pid=1719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:01.996000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:31:02.000000 audit[1722]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_rule pid=1722 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:02.000000 audit[1722]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff013d0b0 a2=0 a3=ffffae8d86c0 items=0 ppid=1589 pid=1722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:02.000000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:31:02.001000 audit[1723]: NETFILTER_CFG table=nat:61 family=10 entries=1 op=nft_register_chain pid=1723 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:02.001000 audit[1723]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe7743070 a2=0 a3=ffffb0ba36c0 items=0 ppid=1589 pid=1723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:02.001000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:31:02.003000 audit[1725]: NETFILTER_CFG table=nat:62 family=10 entries=2 op=nft_register_chain pid=1725 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:02.003000 audit[1725]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffc835ad50 a2=0 a3=ffff8a16b6c0 items=0 ppid=1589 pid=1725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:02.003000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:31:02.007000 audit[1727]: NETFILTER_CFG table=filter:63 family=10 entries=3 op=nft_register_rule pid=1727 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:31:02.007000 audit[1727]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=1916 a0=3 a1=ffffd67fa380 a2=0 a3=ffffb51656c0 items=0 ppid=1589 pid=1727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:02.007000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:31:02.007000 audit[1727]: NETFILTER_CFG table=nat:64 family=10 entries=7 op=nft_register_chain pid=1727 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:31:02.007000 audit[1727]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1968 a0=3 a1=ffffd67fa380 a2=0 a3=ffffb51656c0 items=0 ppid=1589 pid=1727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:02.007000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:31:02.713592 kubelet[1440]: E1002 19:31:02.713545 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:02.817008 kubelet[1440]: E1002 19:31:02.816708 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:31:03.714494 kubelet[1440]: E1002 19:31:03.714441 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:04.715021 kubelet[1440]: E1002 19:31:04.714972 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:05.041068 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3842869714.mount: Deactivated successfully. Oct 2 19:31:05.715689 kubelet[1440]: E1002 19:31:05.715648 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:06.716690 kubelet[1440]: E1002 19:31:06.716646 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:07.348542 env[1141]: time="2023-10-02T19:31:07.348497243Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:07.349714 env[1141]: time="2023-10-02T19:31:07.349668952Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:07.351840 env[1141]: time="2023-10-02T19:31:07.351804538Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:07.352361 env[1141]: time="2023-10-02T19:31:07.352331594Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Oct 2 19:31:07.355296 env[1141]: time="2023-10-02T19:31:07.355261681Z" level=info msg="CreateContainer within sandbox \"0d087a01152f85056fd5c026bad5750a5e4677d5904379231ad8cb8efb6d43fe\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:31:07.362697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1080768501.mount: Deactivated 
successfully. Oct 2 19:31:07.365930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1189433138.mount: Deactivated successfully. Oct 2 19:31:07.368127 env[1141]: time="2023-10-02T19:31:07.368068516Z" level=info msg="CreateContainer within sandbox \"0d087a01152f85056fd5c026bad5750a5e4677d5904379231ad8cb8efb6d43fe\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"831f05bc5b8b006dcc430c270c8aacf828275c7c1fcd06208f31ded56bd167a5\"" Oct 2 19:31:07.368960 env[1141]: time="2023-10-02T19:31:07.368930266Z" level=info msg="StartContainer for \"831f05bc5b8b006dcc430c270c8aacf828275c7c1fcd06208f31ded56bd167a5\"" Oct 2 19:31:07.384622 systemd[1]: Started cri-containerd-831f05bc5b8b006dcc430c270c8aacf828275c7c1fcd06208f31ded56bd167a5.scope. Oct 2 19:31:07.402121 systemd[1]: cri-containerd-831f05bc5b8b006dcc430c270c8aacf828275c7c1fcd06208f31ded56bd167a5.scope: Deactivated successfully. Oct 2 19:31:07.539553 env[1141]: time="2023-10-02T19:31:07.539505038Z" level=info msg="shim disconnected" id=831f05bc5b8b006dcc430c270c8aacf828275c7c1fcd06208f31ded56bd167a5 Oct 2 19:31:07.539807 env[1141]: time="2023-10-02T19:31:07.539787755Z" level=warning msg="cleaning up after shim disconnected" id=831f05bc5b8b006dcc430c270c8aacf828275c7c1fcd06208f31ded56bd167a5 namespace=k8s.io Oct 2 19:31:07.539894 env[1141]: time="2023-10-02T19:31:07.539880138Z" level=info msg="cleaning up dead shim" Oct 2 19:31:07.548867 env[1141]: time="2023-10-02T19:31:07.548822357Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:31:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1763 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:31:07Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/831f05bc5b8b006dcc430c270c8aacf828275c7c1fcd06208f31ded56bd167a5/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:31:07.549346 env[1141]: 
time="2023-10-02T19:31:07.549246292Z" level=error msg="copy shim log" error="read /proc/self/fd/58: file already closed" Oct 2 19:31:07.551524 env[1141]: time="2023-10-02T19:31:07.549571641Z" level=error msg="Failed to pipe stdout of container \"831f05bc5b8b006dcc430c270c8aacf828275c7c1fcd06208f31ded56bd167a5\"" error="reading from a closed fifo" Oct 2 19:31:07.552077 env[1141]: time="2023-10-02T19:31:07.552034843Z" level=error msg="Failed to pipe stderr of container \"831f05bc5b8b006dcc430c270c8aacf828275c7c1fcd06208f31ded56bd167a5\"" error="reading from a closed fifo" Oct 2 19:31:07.554233 env[1141]: time="2023-10-02T19:31:07.554172854Z" level=error msg="StartContainer for \"831f05bc5b8b006dcc430c270c8aacf828275c7c1fcd06208f31ded56bd167a5\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:31:07.554704 kubelet[1440]: E1002 19:31:07.554486 1440 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="831f05bc5b8b006dcc430c270c8aacf828275c7c1fcd06208f31ded56bd167a5" Oct 2 19:31:07.554704 kubelet[1440]: E1002 19:31:07.554617 1440 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:31:07.554704 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:31:07.554704 kubelet[1440]: rm /hostbin/cilium-mount 
Oct 2 19:31:07.554911 kubelet[1440]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-zrprl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-nmrtg_kube-system(e85c9b85-6cb8-4246-9860-75a733298aad): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:31:07.554986 kubelet[1440]: E1002 19:31:07.554680 1440 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime 
create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-nmrtg" podUID=e85c9b85-6cb8-4246-9860-75a733298aad Oct 2 19:31:07.723250 kubelet[1440]: E1002 19:31:07.716932 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:07.825525 kubelet[1440]: E1002 19:31:07.825497 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:31:07.828586 env[1141]: time="2023-10-02T19:31:07.828360328Z" level=info msg="CreateContainer within sandbox \"0d087a01152f85056fd5c026bad5750a5e4677d5904379231ad8cb8efb6d43fe\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:31:07.841798 kubelet[1440]: I1002 19:31:07.841763 1440 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-4p4mq" podStartSLOduration=8.293088509 podCreationTimestamp="2023-10-02 19:30:57 +0000 UTC" firstStartedPulling="2023-10-02 19:30:58.988474005 +0000 UTC m=+4.291928335" lastFinishedPulling="2023-10-02 19:31:01.537108744 +0000 UTC m=+6.840563075" observedRunningTime="2023-10-02 19:31:01.82181539 +0000 UTC m=+7.125269681" watchObservedRunningTime="2023-10-02 19:31:07.841723249 +0000 UTC m=+13.145177580" Oct 2 19:31:07.844407 env[1141]: time="2023-10-02T19:31:07.844260620Z" level=info msg="CreateContainer within sandbox \"0d087a01152f85056fd5c026bad5750a5e4677d5904379231ad8cb8efb6d43fe\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"aaf9e84982f8ed02897b1a6bfffb74a7650cbfc4bac904098945fde4f6a8dc9b\"" Oct 2 19:31:07.845843 env[1141]: time="2023-10-02T19:31:07.845419603Z" level=info msg="StartContainer for \"aaf9e84982f8ed02897b1a6bfffb74a7650cbfc4bac904098945fde4f6a8dc9b\"" Oct 2 19:31:07.876015 
systemd[1]: Started cri-containerd-aaf9e84982f8ed02897b1a6bfffb74a7650cbfc4bac904098945fde4f6a8dc9b.scope. Oct 2 19:31:07.892380 systemd[1]: cri-containerd-aaf9e84982f8ed02897b1a6bfffb74a7650cbfc4bac904098945fde4f6a8dc9b.scope: Deactivated successfully. Oct 2 19:31:07.907486 env[1141]: time="2023-10-02T19:31:07.907433795Z" level=info msg="shim disconnected" id=aaf9e84982f8ed02897b1a6bfffb74a7650cbfc4bac904098945fde4f6a8dc9b Oct 2 19:31:07.907701 env[1141]: time="2023-10-02T19:31:07.907682708Z" level=warning msg="cleaning up after shim disconnected" id=aaf9e84982f8ed02897b1a6bfffb74a7650cbfc4bac904098945fde4f6a8dc9b namespace=k8s.io Oct 2 19:31:07.907761 env[1141]: time="2023-10-02T19:31:07.907748526Z" level=info msg="cleaning up dead shim" Oct 2 19:31:07.916533 env[1141]: time="2023-10-02T19:31:07.916483668Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:31:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1798 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:31:07Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/aaf9e84982f8ed02897b1a6bfffb74a7650cbfc4bac904098945fde4f6a8dc9b/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:31:07.916975 env[1141]: time="2023-10-02T19:31:07.916903627Z" level=error msg="copy shim log" error="read /proc/self/fd/58: file already closed" Oct 2 19:31:07.917201 env[1141]: time="2023-10-02T19:31:07.917142200Z" level=error msg="Failed to pipe stderr of container \"aaf9e84982f8ed02897b1a6bfffb74a7650cbfc4bac904098945fde4f6a8dc9b\"" error="reading from a closed fifo" Oct 2 19:31:07.918001 env[1141]: time="2023-10-02T19:31:07.917958376Z" level=error msg="Failed to pipe stdout of container \"aaf9e84982f8ed02897b1a6bfffb74a7650cbfc4bac904098945fde4f6a8dc9b\"" error="reading from a closed fifo" Oct 2 19:31:07.923275 env[1141]: time="2023-10-02T19:31:07.923228978Z" level=error msg="StartContainer for 
\"aaf9e84982f8ed02897b1a6bfffb74a7650cbfc4bac904098945fde4f6a8dc9b\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:31:07.923626 kubelet[1440]: E1002 19:31:07.923605 1440 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="aaf9e84982f8ed02897b1a6bfffb74a7650cbfc4bac904098945fde4f6a8dc9b" Oct 2 19:31:07.923742 kubelet[1440]: E1002 19:31:07.923721 1440 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:31:07.923742 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:31:07.923742 kubelet[1440]: rm /hostbin/cilium-mount Oct 2 19:31:07.923833 kubelet[1440]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-zrprl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-nmrtg_kube-system(e85c9b85-6cb8-4246-9860-75a733298aad): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:31:07.923833 kubelet[1440]: E1002 19:31:07.923759 1440 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable 
to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-nmrtg" podUID=e85c9b85-6cb8-4246-9860-75a733298aad Oct 2 19:31:08.361194 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-831f05bc5b8b006dcc430c270c8aacf828275c7c1fcd06208f31ded56bd167a5-rootfs.mount: Deactivated successfully. Oct 2 19:31:08.724199 kubelet[1440]: E1002 19:31:08.724076 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:08.829199 kubelet[1440]: I1002 19:31:08.829174 1440 scope.go:115] "RemoveContainer" containerID="831f05bc5b8b006dcc430c270c8aacf828275c7c1fcd06208f31ded56bd167a5" Oct 2 19:31:08.829421 kubelet[1440]: I1002 19:31:08.829408 1440 scope.go:115] "RemoveContainer" containerID="831f05bc5b8b006dcc430c270c8aacf828275c7c1fcd06208f31ded56bd167a5" Oct 2 19:31:08.830927 env[1141]: time="2023-10-02T19:31:08.830893754Z" level=info msg="RemoveContainer for \"831f05bc5b8b006dcc430c270c8aacf828275c7c1fcd06208f31ded56bd167a5\"" Oct 2 19:31:08.831531 env[1141]: time="2023-10-02T19:31:08.831502381Z" level=info msg="RemoveContainer for \"831f05bc5b8b006dcc430c270c8aacf828275c7c1fcd06208f31ded56bd167a5\"" Oct 2 19:31:08.831749 env[1141]: time="2023-10-02T19:31:08.831712115Z" level=error msg="RemoveContainer for \"831f05bc5b8b006dcc430c270c8aacf828275c7c1fcd06208f31ded56bd167a5\" failed" error="failed to set removing state for container \"831f05bc5b8b006dcc430c270c8aacf828275c7c1fcd06208f31ded56bd167a5\": container is already in removing state" Oct 2 19:31:08.832109 kubelet[1440]: E1002 19:31:08.832076 1440 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"831f05bc5b8b006dcc430c270c8aacf828275c7c1fcd06208f31ded56bd167a5\": container is already in removing state" 
containerID="831f05bc5b8b006dcc430c270c8aacf828275c7c1fcd06208f31ded56bd167a5" Oct 2 19:31:08.832198 kubelet[1440]: I1002 19:31:08.832146 1440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:831f05bc5b8b006dcc430c270c8aacf828275c7c1fcd06208f31ded56bd167a5} err="rpc error: code = Unknown desc = failed to set removing state for container \"831f05bc5b8b006dcc430c270c8aacf828275c7c1fcd06208f31ded56bd167a5\": container is already in removing state" Oct 2 19:31:08.834509 env[1141]: time="2023-10-02T19:31:08.834478057Z" level=info msg="RemoveContainer for \"831f05bc5b8b006dcc430c270c8aacf828275c7c1fcd06208f31ded56bd167a5\" returns successfully" Oct 2 19:31:08.834863 kubelet[1440]: E1002 19:31:08.834843 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:31:08.835222 kubelet[1440]: E1002 19:31:08.835070 1440 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-nmrtg_kube-system(e85c9b85-6cb8-4246-9860-75a733298aad)\"" pod="kube-system/cilium-nmrtg" podUID=e85c9b85-6cb8-4246-9860-75a733298aad Oct 2 19:31:09.725113 kubelet[1440]: E1002 19:31:09.725034 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:09.833410 kubelet[1440]: E1002 19:31:09.833378 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:31:09.833687 kubelet[1440]: E1002 19:31:09.833659 1440 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup 
pod=cilium-nmrtg_kube-system(e85c9b85-6cb8-4246-9860-75a733298aad)\"" pod="kube-system/cilium-nmrtg" podUID=e85c9b85-6cb8-4246-9860-75a733298aad Oct 2 19:31:10.643900 kubelet[1440]: W1002 19:31:10.643860 1440 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode85c9b85_6cb8_4246_9860_75a733298aad.slice/cri-containerd-831f05bc5b8b006dcc430c270c8aacf828275c7c1fcd06208f31ded56bd167a5.scope WatchSource:0}: container "831f05bc5b8b006dcc430c270c8aacf828275c7c1fcd06208f31ded56bd167a5" in namespace "k8s.io": not found Oct 2 19:31:10.726049 kubelet[1440]: E1002 19:31:10.725999 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:11.726865 kubelet[1440]: E1002 19:31:11.726835 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:12.727660 kubelet[1440]: E1002 19:31:12.727618 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:13.728085 kubelet[1440]: E1002 19:31:13.728039 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:13.750915 kubelet[1440]: W1002 19:31:13.750879 1440 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode85c9b85_6cb8_4246_9860_75a733298aad.slice/cri-containerd-aaf9e84982f8ed02897b1a6bfffb74a7650cbfc4bac904098945fde4f6a8dc9b.scope WatchSource:0}: task aaf9e84982f8ed02897b1a6bfffb74a7650cbfc4bac904098945fde4f6a8dc9b not found: not found Oct 2 19:31:14.729077 kubelet[1440]: E1002 19:31:14.729029 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:15.709266 kubelet[1440]: E1002 19:31:15.709226 1440 file.go:104] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:15.729606 kubelet[1440]: E1002 19:31:15.729582 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:16.730475 kubelet[1440]: E1002 19:31:16.730429 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:17.731589 kubelet[1440]: E1002 19:31:17.731552 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:18.732962 kubelet[1440]: E1002 19:31:18.732916 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:19.733481 kubelet[1440]: E1002 19:31:19.733418 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:20.734534 kubelet[1440]: E1002 19:31:20.734500 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:21.735644 kubelet[1440]: E1002 19:31:21.735607 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:21.800812 kubelet[1440]: E1002 19:31:21.800781 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:31:21.803008 env[1141]: time="2023-10-02T19:31:21.802968391Z" level=info msg="CreateContainer within sandbox \"0d087a01152f85056fd5c026bad5750a5e4677d5904379231ad8cb8efb6d43fe\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:31:21.811672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2954719183.mount: Deactivated successfully. 
Oct 2 19:31:21.814794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount576433079.mount: Deactivated successfully. Oct 2 19:31:21.817064 env[1141]: time="2023-10-02T19:31:21.817026171Z" level=info msg="CreateContainer within sandbox \"0d087a01152f85056fd5c026bad5750a5e4677d5904379231ad8cb8efb6d43fe\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"48eadf18df4a5a9e016d05233fe1c6b315284a487735770025a8b372d411a663\"" Oct 2 19:31:21.820296 env[1141]: time="2023-10-02T19:31:21.820260206Z" level=info msg="StartContainer for \"48eadf18df4a5a9e016d05233fe1c6b315284a487735770025a8b372d411a663\"" Oct 2 19:31:21.836456 systemd[1]: Started cri-containerd-48eadf18df4a5a9e016d05233fe1c6b315284a487735770025a8b372d411a663.scope. Oct 2 19:31:21.867723 systemd[1]: cri-containerd-48eadf18df4a5a9e016d05233fe1c6b315284a487735770025a8b372d411a663.scope: Deactivated successfully. Oct 2 19:31:21.881013 env[1141]: time="2023-10-02T19:31:21.880953466Z" level=info msg="shim disconnected" id=48eadf18df4a5a9e016d05233fe1c6b315284a487735770025a8b372d411a663 Oct 2 19:31:21.881013 env[1141]: time="2023-10-02T19:31:21.881006618Z" level=warning msg="cleaning up after shim disconnected" id=48eadf18df4a5a9e016d05233fe1c6b315284a487735770025a8b372d411a663 namespace=k8s.io Oct 2 19:31:21.881013 env[1141]: time="2023-10-02T19:31:21.881016010Z" level=info msg="cleaning up dead shim" Oct 2 19:31:21.892664 env[1141]: time="2023-10-02T19:31:21.892614064Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:31:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1836 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:31:21Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/48eadf18df4a5a9e016d05233fe1c6b315284a487735770025a8b372d411a663/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:31:21.893026 env[1141]: time="2023-10-02T19:31:21.892969267Z" 
level=error msg="copy shim log" error="read /proc/self/fd/58: file already closed" Oct 2 19:31:21.893302 env[1141]: time="2023-10-02T19:31:21.893234271Z" level=error msg="Failed to pipe stdout of container \"48eadf18df4a5a9e016d05233fe1c6b315284a487735770025a8b372d411a663\"" error="reading from a closed fifo" Oct 2 19:31:21.895204 env[1141]: time="2023-10-02T19:31:21.895162031Z" level=error msg="Failed to pipe stderr of container \"48eadf18df4a5a9e016d05233fe1c6b315284a487735770025a8b372d411a663\"" error="reading from a closed fifo" Oct 2 19:31:21.896563 env[1141]: time="2023-10-02T19:31:21.896515664Z" level=error msg="StartContainer for \"48eadf18df4a5a9e016d05233fe1c6b315284a487735770025a8b372d411a663\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:31:21.896933 kubelet[1440]: E1002 19:31:21.896839 1440 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="48eadf18df4a5a9e016d05233fe1c6b315284a487735770025a8b372d411a663" Oct 2 19:31:21.897030 kubelet[1440]: E1002 19:31:21.896946 1440 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:31:21.897030 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:31:21.897030 kubelet[1440]: rm /hostbin/cilium-mount Oct 2 19:31:21.897030 kubelet[1440]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-zrprl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-nmrtg_kube-system(e85c9b85-6cb8-4246-9860-75a733298aad): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:31:21.897030 kubelet[1440]: E1002 19:31:21.896981 1440 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable 
to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-nmrtg" podUID=e85c9b85-6cb8-4246-9860-75a733298aad Oct 2 19:31:22.742473 kubelet[1440]: E1002 19:31:22.737528 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:22.809240 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48eadf18df4a5a9e016d05233fe1c6b315284a487735770025a8b372d411a663-rootfs.mount: Deactivated successfully. Oct 2 19:31:22.860108 kubelet[1440]: I1002 19:31:22.860062 1440 scope.go:115] "RemoveContainer" containerID="aaf9e84982f8ed02897b1a6bfffb74a7650cbfc4bac904098945fde4f6a8dc9b" Oct 2 19:31:22.860526 kubelet[1440]: I1002 19:31:22.860495 1440 scope.go:115] "RemoveContainer" containerID="aaf9e84982f8ed02897b1a6bfffb74a7650cbfc4bac904098945fde4f6a8dc9b" Oct 2 19:31:22.861786 env[1141]: time="2023-10-02T19:31:22.861713853Z" level=info msg="RemoveContainer for \"aaf9e84982f8ed02897b1a6bfffb74a7650cbfc4bac904098945fde4f6a8dc9b\"" Oct 2 19:31:22.863680 env[1141]: time="2023-10-02T19:31:22.863646625Z" level=info msg="RemoveContainer for \"aaf9e84982f8ed02897b1a6bfffb74a7650cbfc4bac904098945fde4f6a8dc9b\"" Oct 2 19:31:22.863948 env[1141]: time="2023-10-02T19:31:22.863904394Z" level=error msg="RemoveContainer for \"aaf9e84982f8ed02897b1a6bfffb74a7650cbfc4bac904098945fde4f6a8dc9b\" failed" error="failed to set removing state for container \"aaf9e84982f8ed02897b1a6bfffb74a7650cbfc4bac904098945fde4f6a8dc9b\": container is already in removing state" Oct 2 19:31:22.864236 kubelet[1440]: E1002 19:31:22.864201 1440 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"aaf9e84982f8ed02897b1a6bfffb74a7650cbfc4bac904098945fde4f6a8dc9b\": container is already in removing state" 
containerID="aaf9e84982f8ed02897b1a6bfffb74a7650cbfc4bac904098945fde4f6a8dc9b" Oct 2 19:31:22.864310 kubelet[1440]: E1002 19:31:22.864246 1440 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "aaf9e84982f8ed02897b1a6bfffb74a7650cbfc4bac904098945fde4f6a8dc9b": container is already in removing state; Skipping pod "cilium-nmrtg_kube-system(e85c9b85-6cb8-4246-9860-75a733298aad)" Oct 2 19:31:22.864360 kubelet[1440]: E1002 19:31:22.864326 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:31:22.864591 kubelet[1440]: E1002 19:31:22.864567 1440 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-nmrtg_kube-system(e85c9b85-6cb8-4246-9860-75a733298aad)\"" pod="kube-system/cilium-nmrtg" podUID=e85c9b85-6cb8-4246-9860-75a733298aad Oct 2 19:31:22.903720 env[1141]: time="2023-10-02T19:31:22.903653104Z" level=info msg="RemoveContainer for \"aaf9e84982f8ed02897b1a6bfffb74a7650cbfc4bac904098945fde4f6a8dc9b\" returns successfully" Oct 2 19:31:23.738638 kubelet[1440]: E1002 19:31:23.738586 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:24.738923 kubelet[1440]: E1002 19:31:24.738890 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:24.983570 kubelet[1440]: W1002 19:31:24.983532 1440 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode85c9b85_6cb8_4246_9860_75a733298aad.slice/cri-containerd-48eadf18df4a5a9e016d05233fe1c6b315284a487735770025a8b372d411a663.scope 
WatchSource:0}: task 48eadf18df4a5a9e016d05233fe1c6b315284a487735770025a8b372d411a663 not found: not found Oct 2 19:31:25.740174 kubelet[1440]: E1002 19:31:25.740144 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:26.741396 kubelet[1440]: E1002 19:31:26.741322 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:27.741718 kubelet[1440]: E1002 19:31:27.741658 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:28.742227 kubelet[1440]: E1002 19:31:28.742181 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:29.742364 kubelet[1440]: E1002 19:31:29.742310 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:30.742947 kubelet[1440]: E1002 19:31:30.742892 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:31.743202 kubelet[1440]: E1002 19:31:31.743170 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:32.744196 kubelet[1440]: E1002 19:31:32.744078 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:33.744301 kubelet[1440]: E1002 19:31:33.744241 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:34.744897 kubelet[1440]: E1002 19:31:34.744857 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:35.709083 kubelet[1440]: E1002 19:31:35.709046 1440 file.go:104] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:35.745342 kubelet[1440]: E1002 19:31:35.745298 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:35.967930 update_engine[1131]: I1002 19:31:35.967792 1131 update_attempter.cc:505] Updating boot flags... Oct 2 19:31:36.745735 kubelet[1440]: E1002 19:31:36.745701 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:36.800746 kubelet[1440]: E1002 19:31:36.800711 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:31:36.801081 kubelet[1440]: E1002 19:31:36.801056 1440 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-nmrtg_kube-system(e85c9b85-6cb8-4246-9860-75a733298aad)\"" pod="kube-system/cilium-nmrtg" podUID=e85c9b85-6cb8-4246-9860-75a733298aad Oct 2 19:31:37.746223 kubelet[1440]: E1002 19:31:37.746161 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:38.748346 kubelet[1440]: E1002 19:31:38.748241 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:39.748572 kubelet[1440]: E1002 19:31:39.748540 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:40.749485 kubelet[1440]: E1002 19:31:40.749429 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:41.750061 kubelet[1440]: E1002 19:31:41.750007 1440 file_linux.go:61] "Unable 
to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:42.750953 kubelet[1440]: E1002 19:31:42.750922 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:43.751979 kubelet[1440]: E1002 19:31:43.751943 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:44.753220 kubelet[1440]: E1002 19:31:44.753176 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:45.753807 kubelet[1440]: E1002 19:31:45.753772 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:46.755124 kubelet[1440]: E1002 19:31:46.755066 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:47.756281 kubelet[1440]: E1002 19:31:47.756217 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:48.757543 kubelet[1440]: E1002 19:31:48.757507 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:48.801294 kubelet[1440]: E1002 19:31:48.801268 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:31:48.803369 env[1141]: time="2023-10-02T19:31:48.803302647Z" level=info msg="CreateContainer within sandbox \"0d087a01152f85056fd5c026bad5750a5e4677d5904379231ad8cb8efb6d43fe\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:31:48.814853 env[1141]: time="2023-10-02T19:31:48.814793203Z" level=info msg="CreateContainer within sandbox 
\"0d087a01152f85056fd5c026bad5750a5e4677d5904379231ad8cb8efb6d43fe\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"8b2de9392dcc58eb9fbd1f1214ee023a482026faeb4989e7557177831bc59e22\"" Oct 2 19:31:48.815364 env[1141]: time="2023-10-02T19:31:48.815337622Z" level=info msg="StartContainer for \"8b2de9392dcc58eb9fbd1f1214ee023a482026faeb4989e7557177831bc59e22\"" Oct 2 19:31:48.833320 systemd[1]: Started cri-containerd-8b2de9392dcc58eb9fbd1f1214ee023a482026faeb4989e7557177831bc59e22.scope. Oct 2 19:31:48.858073 systemd[1]: cri-containerd-8b2de9392dcc58eb9fbd1f1214ee023a482026faeb4989e7557177831bc59e22.scope: Deactivated successfully. Oct 2 19:31:48.861813 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b2de9392dcc58eb9fbd1f1214ee023a482026faeb4989e7557177831bc59e22-rootfs.mount: Deactivated successfully. Oct 2 19:31:48.865584 env[1141]: time="2023-10-02T19:31:48.865533943Z" level=info msg="shim disconnected" id=8b2de9392dcc58eb9fbd1f1214ee023a482026faeb4989e7557177831bc59e22 Oct 2 19:31:48.865701 env[1141]: time="2023-10-02T19:31:48.865589749Z" level=warning msg="cleaning up after shim disconnected" id=8b2de9392dcc58eb9fbd1f1214ee023a482026faeb4989e7557177831bc59e22 namespace=k8s.io Oct 2 19:31:48.865701 env[1141]: time="2023-10-02T19:31:48.865599870Z" level=info msg="cleaning up dead shim" Oct 2 19:31:48.873824 env[1141]: time="2023-10-02T19:31:48.873774790Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:31:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1888 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:31:48Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/8b2de9392dcc58eb9fbd1f1214ee023a482026faeb4989e7557177831bc59e22/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:31:48.874081 env[1141]: time="2023-10-02T19:31:48.874026817Z" level=error msg="copy shim log" error="read 
/proc/self/fd/23: file already closed" Oct 2 19:31:48.874269 env[1141]: time="2023-10-02T19:31:48.874227919Z" level=error msg="Failed to pipe stdout of container \"8b2de9392dcc58eb9fbd1f1214ee023a482026faeb4989e7557177831bc59e22\"" error="reading from a closed fifo" Oct 2 19:31:48.874311 env[1141]: time="2023-10-02T19:31:48.874262362Z" level=error msg="Failed to pipe stderr of container \"8b2de9392dcc58eb9fbd1f1214ee023a482026faeb4989e7557177831bc59e22\"" error="reading from a closed fifo" Oct 2 19:31:48.875654 env[1141]: time="2023-10-02T19:31:48.875594506Z" level=error msg="StartContainer for \"8b2de9392dcc58eb9fbd1f1214ee023a482026faeb4989e7557177831bc59e22\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:31:48.875877 kubelet[1440]: E1002 19:31:48.875853 1440 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="8b2de9392dcc58eb9fbd1f1214ee023a482026faeb4989e7557177831bc59e22" Oct 2 19:31:48.876005 kubelet[1440]: E1002 19:31:48.875979 1440 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:31:48.876005 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:31:48.876005 kubelet[1440]: rm /hostbin/cilium-mount Oct 2 19:31:48.876005 kubelet[1440]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-zrprl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-nmrtg_kube-system(e85c9b85-6cb8-4246-9860-75a733298aad): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:31:48.876168 kubelet[1440]: E1002 19:31:48.876022 1440 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable 
to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-nmrtg" podUID=e85c9b85-6cb8-4246-9860-75a733298aad Oct 2 19:31:48.914774 kubelet[1440]: I1002 19:31:48.914746 1440 scope.go:115] "RemoveContainer" containerID="48eadf18df4a5a9e016d05233fe1c6b315284a487735770025a8b372d411a663" Oct 2 19:31:48.915079 kubelet[1440]: I1002 19:31:48.915065 1440 scope.go:115] "RemoveContainer" containerID="48eadf18df4a5a9e016d05233fe1c6b315284a487735770025a8b372d411a663" Oct 2 19:31:48.916284 env[1141]: time="2023-10-02T19:31:48.916235519Z" level=info msg="RemoveContainer for \"48eadf18df4a5a9e016d05233fe1c6b315284a487735770025a8b372d411a663\"" Oct 2 19:31:48.916725 env[1141]: time="2023-10-02T19:31:48.916680207Z" level=info msg="RemoveContainer for \"48eadf18df4a5a9e016d05233fe1c6b315284a487735770025a8b372d411a663\"" Oct 2 19:31:48.916818 env[1141]: time="2023-10-02T19:31:48.916759415Z" level=error msg="RemoveContainer for \"48eadf18df4a5a9e016d05233fe1c6b315284a487735770025a8b372d411a663\" failed" error="failed to set removing state for container \"48eadf18df4a5a9e016d05233fe1c6b315284a487735770025a8b372d411a663\": container is already in removing state" Oct 2 19:31:48.916912 kubelet[1440]: E1002 19:31:48.916895 1440 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"48eadf18df4a5a9e016d05233fe1c6b315284a487735770025a8b372d411a663\": container is already in removing state" containerID="48eadf18df4a5a9e016d05233fe1c6b315284a487735770025a8b372d411a663" Oct 2 19:31:48.916963 kubelet[1440]: E1002 19:31:48.916929 1440 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "48eadf18df4a5a9e016d05233fe1c6b315284a487735770025a8b372d411a663": container is already in removing state; Skipping pod 
"cilium-nmrtg_kube-system(e85c9b85-6cb8-4246-9860-75a733298aad)" Oct 2 19:31:48.916996 kubelet[1440]: E1002 19:31:48.916989 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:31:48.917243 kubelet[1440]: E1002 19:31:48.917224 1440 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-nmrtg_kube-system(e85c9b85-6cb8-4246-9860-75a733298aad)\"" pod="kube-system/cilium-nmrtg" podUID=e85c9b85-6cb8-4246-9860-75a733298aad Oct 2 19:31:48.918429 env[1141]: time="2023-10-02T19:31:48.918399312Z" level=info msg="RemoveContainer for \"48eadf18df4a5a9e016d05233fe1c6b315284a487735770025a8b372d411a663\" returns successfully" Oct 2 19:31:49.757998 kubelet[1440]: E1002 19:31:49.757965 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:50.759170 kubelet[1440]: E1002 19:31:50.759142 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:51.760413 kubelet[1440]: E1002 19:31:51.760383 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:51.971040 kubelet[1440]: W1002 19:31:51.970999 1440 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode85c9b85_6cb8_4246_9860_75a733298aad.slice/cri-containerd-8b2de9392dcc58eb9fbd1f1214ee023a482026faeb4989e7557177831bc59e22.scope WatchSource:0}: task 8b2de9392dcc58eb9fbd1f1214ee023a482026faeb4989e7557177831bc59e22 not found: not found Oct 2 19:31:52.761450 kubelet[1440]: E1002 19:31:52.761390 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:53.762498 kubelet[1440]: E1002 19:31:53.762458 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:54.762908 kubelet[1440]: E1002 19:31:54.762872 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:55.709120 kubelet[1440]: E1002 19:31:55.709032 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:55.763859 kubelet[1440]: E1002 19:31:55.763825 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:56.765281 kubelet[1440]: E1002 19:31:56.765235 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:57.765716 kubelet[1440]: E1002 19:31:57.765652 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:58.766035 kubelet[1440]: E1002 19:31:58.765999 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:59.767635 kubelet[1440]: E1002 19:31:59.767590 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:00.767919 kubelet[1440]: E1002 19:32:00.767867 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:01.768173 kubelet[1440]: E1002 19:32:01.768141 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:01.800822 kubelet[1440]: E1002 19:32:01.800795 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:32:01.801235 kubelet[1440]: E1002 19:32:01.801219 1440 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-nmrtg_kube-system(e85c9b85-6cb8-4246-9860-75a733298aad)\"" pod="kube-system/cilium-nmrtg" podUID=e85c9b85-6cb8-4246-9860-75a733298aad Oct 2 19:32:02.769271 kubelet[1440]: E1002 19:32:02.769237 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:03.769997 kubelet[1440]: E1002 19:32:03.769934 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:04.770205 kubelet[1440]: E1002 19:32:04.770158 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:05.770539 kubelet[1440]: E1002 19:32:05.770475 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:06.771292 kubelet[1440]: E1002 19:32:06.771252 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:07.772112 kubelet[1440]: E1002 19:32:07.772052 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:08.772405 kubelet[1440]: E1002 19:32:08.772342 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:09.773376 kubelet[1440]: E1002 19:32:09.773328 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:09.801128 kubelet[1440]: E1002 19:32:09.801085 1440 dns.go:158] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:32:10.774086 kubelet[1440]: E1002 19:32:10.774040 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:11.775195 kubelet[1440]: E1002 19:32:11.775136 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:12.775340 kubelet[1440]: E1002 19:32:12.775305 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:13.776011 kubelet[1440]: E1002 19:32:13.775949 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:14.776790 kubelet[1440]: E1002 19:32:14.776734 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:15.708351 kubelet[1440]: E1002 19:32:15.708286 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:15.776838 kubelet[1440]: E1002 19:32:15.776792 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:16.777730 kubelet[1440]: E1002 19:32:16.777663 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:16.800394 kubelet[1440]: E1002 19:32:16.800362 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:32:16.800695 kubelet[1440]: E1002 19:32:16.800676 1440 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with 
CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-nmrtg_kube-system(e85c9b85-6cb8-4246-9860-75a733298aad)\"" pod="kube-system/cilium-nmrtg" podUID=e85c9b85-6cb8-4246-9860-75a733298aad Oct 2 19:32:17.778174 kubelet[1440]: E1002 19:32:17.778079 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:18.778313 kubelet[1440]: E1002 19:32:18.778270 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:19.779910 kubelet[1440]: E1002 19:32:19.779783 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:20.780033 kubelet[1440]: E1002 19:32:20.780002 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:21.780583 kubelet[1440]: E1002 19:32:21.780537 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:22.780770 kubelet[1440]: E1002 19:32:22.780723 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:23.781085 kubelet[1440]: E1002 19:32:23.781028 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:24.781752 kubelet[1440]: E1002 19:32:24.781690 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:25.782638 kubelet[1440]: E1002 19:32:25.782591 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:26.785902 kubelet[1440]: E1002 19:32:26.785868 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:32:27.786666 kubelet[1440]: E1002 19:32:27.786644 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:28.787489 kubelet[1440]: E1002 19:32:28.787439 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:29.788331 kubelet[1440]: E1002 19:32:29.788297 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:30.789253 kubelet[1440]: E1002 19:32:30.789224 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:31.790460 kubelet[1440]: E1002 19:32:31.790401 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:31.801201 kubelet[1440]: E1002 19:32:31.801175 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:32:31.803552 env[1141]: time="2023-10-02T19:32:31.803484767Z" level=info msg="CreateContainer within sandbox \"0d087a01152f85056fd5c026bad5750a5e4677d5904379231ad8cb8efb6d43fe\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 19:32:31.811235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1074541309.mount: Deactivated successfully. 
Oct 2 19:32:31.814716 env[1141]: time="2023-10-02T19:32:31.814670017Z" level=info msg="CreateContainer within sandbox \"0d087a01152f85056fd5c026bad5750a5e4677d5904379231ad8cb8efb6d43fe\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"14d29a95366d7e708b2c6357d4f210a990d08e58a09d9871e8b8bb576f0362c7\""
Oct 2 19:32:31.815438 env[1141]: time="2023-10-02T19:32:31.815406959Z" level=info msg="StartContainer for \"14d29a95366d7e708b2c6357d4f210a990d08e58a09d9871e8b8bb576f0362c7\""
Oct 2 19:32:31.841988 systemd[1]: Started cri-containerd-14d29a95366d7e708b2c6357d4f210a990d08e58a09d9871e8b8bb576f0362c7.scope.
Oct 2 19:32:31.858142 systemd[1]: cri-containerd-14d29a95366d7e708b2c6357d4f210a990d08e58a09d9871e8b8bb576f0362c7.scope: Deactivated successfully.
Oct 2 19:32:31.867312 env[1141]: time="2023-10-02T19:32:31.867252269Z" level=info msg="shim disconnected" id=14d29a95366d7e708b2c6357d4f210a990d08e58a09d9871e8b8bb576f0362c7
Oct 2 19:32:31.867564 env[1141]: time="2023-10-02T19:32:31.867542942Z" level=warning msg="cleaning up after shim disconnected" id=14d29a95366d7e708b2c6357d4f210a990d08e58a09d9871e8b8bb576f0362c7 namespace=k8s.io
Oct 2 19:32:31.867664 env[1141]: time="2023-10-02T19:32:31.867648539Z" level=info msg="cleaning up dead shim"
Oct 2 19:32:31.878000 env[1141]: time="2023-10-02T19:32:31.877940851Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:32:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1930 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:32:31Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/14d29a95366d7e708b2c6357d4f210a990d08e58a09d9871e8b8bb576f0362c7/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Oct 2 19:32:31.878501 env[1141]: time="2023-10-02T19:32:31.878434159Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed"
Oct 2 19:32:31.879222 env[1141]: time="2023-10-02T19:32:31.879188021Z" level=error msg="Failed to pipe stderr of container \"14d29a95366d7e708b2c6357d4f210a990d08e58a09d9871e8b8bb576f0362c7\"" error="reading from a closed fifo"
Oct 2 19:32:31.879342 env[1141]: time="2023-10-02T19:32:31.879215340Z" level=error msg="Failed to pipe stdout of container \"14d29a95366d7e708b2c6357d4f210a990d08e58a09d9871e8b8bb576f0362c7\"" error="reading from a closed fifo"
Oct 2 19:32:31.880885 env[1141]: time="2023-10-02T19:32:31.880832141Z" level=error msg="StartContainer for \"14d29a95366d7e708b2c6357d4f210a990d08e58a09d9871e8b8bb576f0362c7\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Oct 2 19:32:31.881167 kubelet[1440]: E1002 19:32:31.881144 1440 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="14d29a95366d7e708b2c6357d4f210a990d08e58a09d9871e8b8bb576f0362c7"
Oct 2 19:32:31.881269 kubelet[1440]: E1002 19:32:31.881252 1440 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Oct 2 19:32:31.881269 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Oct 2 19:32:31.881269 kubelet[1440]: rm /hostbin/cilium-mount
Oct 2 19:32:31.881269 kubelet[1440]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-zrprl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-nmrtg_kube-system(e85c9b85-6cb8-4246-9860-75a733298aad): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Oct 2 19:32:31.881399 kubelet[1440]: E1002 19:32:31.881297 1440 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-nmrtg" podUID=e85c9b85-6cb8-4246-9860-75a733298aad
Oct 2 19:32:31.977110 kubelet[1440]: I1002 19:32:31.977058 1440 scope.go:115] "RemoveContainer" containerID="8b2de9392dcc58eb9fbd1f1214ee023a482026faeb4989e7557177831bc59e22"
Oct 2 19:32:31.977464 kubelet[1440]: I1002 19:32:31.977393 1440 scope.go:115] "RemoveContainer" containerID="8b2de9392dcc58eb9fbd1f1214ee023a482026faeb4989e7557177831bc59e22"
Oct 2 19:32:31.978778 env[1141]: time="2023-10-02T19:32:31.978725580Z" level=info msg="RemoveContainer for \"8b2de9392dcc58eb9fbd1f1214ee023a482026faeb4989e7557177831bc59e22\""
Oct 2 19:32:31.979114 env[1141]: time="2023-10-02T19:32:31.979075212Z" level=info msg="RemoveContainer for \"8b2de9392dcc58eb9fbd1f1214ee023a482026faeb4989e7557177831bc59e22\""
Oct 2 19:32:31.979352 env[1141]: time="2023-10-02T19:32:31.979319166Z" level=error msg="RemoveContainer for \"8b2de9392dcc58eb9fbd1f1214ee023a482026faeb4989e7557177831bc59e22\" failed" error="failed to set removing state for container \"8b2de9392dcc58eb9fbd1f1214ee023a482026faeb4989e7557177831bc59e22\": container is already in removing state"
Oct 2 19:32:31.980184 kubelet[1440]: E1002 19:32:31.979551 1440 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"8b2de9392dcc58eb9fbd1f1214ee023a482026faeb4989e7557177831bc59e22\": container is already in removing state" containerID="8b2de9392dcc58eb9fbd1f1214ee023a482026faeb4989e7557177831bc59e22"
Oct 2 19:32:31.980184 kubelet[1440]: E1002 19:32:31.979585 1440 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "8b2de9392dcc58eb9fbd1f1214ee023a482026faeb4989e7557177831bc59e22": container is already in removing state; Skipping pod "cilium-nmrtg_kube-system(e85c9b85-6cb8-4246-9860-75a733298aad)"
Oct 2 19:32:31.980184 kubelet[1440]: E1002 19:32:31.979674 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:32:31.980184 kubelet[1440]: E1002 19:32:31.979889 1440 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-nmrtg_kube-system(e85c9b85-6cb8-4246-9860-75a733298aad)\"" pod="kube-system/cilium-nmrtg" podUID=e85c9b85-6cb8-4246-9860-75a733298aad
Oct 2 19:32:31.986250 env[1141]: time="2023-10-02T19:32:31.986207880Z" level=info msg="RemoveContainer for \"8b2de9392dcc58eb9fbd1f1214ee023a482026faeb4989e7557177831bc59e22\" returns successfully"
Oct 2 19:32:32.790915 kubelet[1440]: E1002 19:32:32.790879 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:32.809146 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14d29a95366d7e708b2c6357d4f210a990d08e58a09d9871e8b8bb576f0362c7-rootfs.mount: Deactivated successfully.
Oct 2 19:32:33.791455 kubelet[1440]: E1002 19:32:33.791391 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:34.792362 kubelet[1440]: E1002 19:32:34.792314 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:34.970777 kubelet[1440]: W1002 19:32:34.970715 1440 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode85c9b85_6cb8_4246_9860_75a733298aad.slice/cri-containerd-14d29a95366d7e708b2c6357d4f210a990d08e58a09d9871e8b8bb576f0362c7.scope WatchSource:0}: task 14d29a95366d7e708b2c6357d4f210a990d08e58a09d9871e8b8bb576f0362c7 not found: not found
Oct 2 19:32:35.708569 kubelet[1440]: E1002 19:32:35.708499 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:35.793299 kubelet[1440]: E1002 19:32:35.793261 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:36.794338 kubelet[1440]: E1002 19:32:36.794293 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:37.794730 kubelet[1440]: E1002 19:32:37.794689 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:38.795749 kubelet[1440]: E1002 19:32:38.795657 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:39.796743 kubelet[1440]: E1002 19:32:39.796687 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:40.796947 kubelet[1440]: E1002 19:32:40.796909 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:41.797986 kubelet[1440]: E1002 19:32:41.797948 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:42.799023 kubelet[1440]: E1002 19:32:42.798984 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:42.800894 kubelet[1440]: E1002 19:32:42.800848 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:32:42.801164 kubelet[1440]: E1002 19:32:42.801140 1440 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-nmrtg_kube-system(e85c9b85-6cb8-4246-9860-75a733298aad)\"" pod="kube-system/cilium-nmrtg" podUID=e85c9b85-6cb8-4246-9860-75a733298aad
Oct 2 19:32:43.799857 kubelet[1440]: E1002 19:32:43.799822 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:44.800635 kubelet[1440]: E1002 19:32:44.800602 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:45.801673 kubelet[1440]: E1002 19:32:45.801641 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:46.802180 kubelet[1440]: E1002 19:32:46.802136 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:47.802246 kubelet[1440]: E1002 19:32:47.802189 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:48.802774 kubelet[1440]: E1002 19:32:48.802739 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:49.804177 kubelet[1440]: E1002 19:32:49.804149 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:50.804883 kubelet[1440]: E1002 19:32:50.804834 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:51.805336 kubelet[1440]: E1002 19:32:51.805291 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:52.806079 kubelet[1440]: E1002 19:32:52.806039 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:53.806468 kubelet[1440]: E1002 19:32:53.806406 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:54.800429 kubelet[1440]: E1002 19:32:54.800397 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:32:54.800834 kubelet[1440]: E1002 19:32:54.800820 1440 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-nmrtg_kube-system(e85c9b85-6cb8-4246-9860-75a733298aad)\"" pod="kube-system/cilium-nmrtg" podUID=e85c9b85-6cb8-4246-9860-75a733298aad
Oct 2 19:32:54.807518 kubelet[1440]: E1002 19:32:54.807485 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:55.709208 kubelet[1440]: E1002 19:32:55.709174 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:55.755478 kubelet[1440]: E1002 19:32:55.755423 1440 kubelet_node_status.go:452] "Node not becoming ready in time after startup"
Oct 2 19:32:55.782960 kubelet[1440]: E1002 19:32:55.782908 1440 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:32:55.807898 kubelet[1440]: E1002 19:32:55.807870 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:56.809033 kubelet[1440]: E1002 19:32:56.808975 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:57.809792 kubelet[1440]: E1002 19:32:57.809758 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:58.810486 kubelet[1440]: E1002 19:32:58.810439 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:59.810703 kubelet[1440]: E1002 19:32:59.810672 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:00.784205 kubelet[1440]: E1002 19:33:00.784169 1440 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:33:00.811915 kubelet[1440]: E1002 19:33:00.811857 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:01.812880 kubelet[1440]: E1002 19:33:01.812835 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:02.813683 kubelet[1440]: E1002 19:33:02.813632 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:03.814250 kubelet[1440]: E1002 19:33:03.814206 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:04.814502 kubelet[1440]: E1002 19:33:04.814452 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:05.784665 kubelet[1440]: E1002 19:33:05.784628 1440 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:33:05.815171 kubelet[1440]: E1002 19:33:05.815127 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:06.800971 kubelet[1440]: E1002 19:33:06.800940 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:33:06.801395 kubelet[1440]: E1002 19:33:06.801377 1440 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-nmrtg_kube-system(e85c9b85-6cb8-4246-9860-75a733298aad)\"" pod="kube-system/cilium-nmrtg" podUID=e85c9b85-6cb8-4246-9860-75a733298aad
Oct 2 19:33:06.815768 kubelet[1440]: E1002 19:33:06.815727 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:07.816157 kubelet[1440]: E1002 19:33:07.816131 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:08.823657 kubelet[1440]: E1002 19:33:08.823607 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:09.824827 kubelet[1440]: E1002 19:33:09.824781 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:10.785734 kubelet[1440]: E1002 19:33:10.785709 1440 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:33:10.825102 kubelet[1440]: E1002 19:33:10.825065 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:11.825706 kubelet[1440]: E1002 19:33:11.825665 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:12.826349 kubelet[1440]: E1002 19:33:12.826312 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:13.827338 kubelet[1440]: E1002 19:33:13.827286 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:14.827845 kubelet[1440]: E1002 19:33:14.827804 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:15.708642 kubelet[1440]: E1002 19:33:15.708607 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:15.786091 kubelet[1440]: E1002 19:33:15.786066 1440 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:33:15.828736 kubelet[1440]: E1002 19:33:15.828689 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:16.829210 kubelet[1440]: E1002 19:33:16.829158 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:17.829413 kubelet[1440]: E1002 19:33:17.829338 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:18.830073 kubelet[1440]: E1002 19:33:18.830037 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:19.831601 kubelet[1440]: E1002 19:33:19.831529 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:20.787919 kubelet[1440]: E1002 19:33:20.787268 1440 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:33:20.800873 kubelet[1440]: E1002 19:33:20.800829 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:33:20.801107 kubelet[1440]: E1002 19:33:20.801048 1440 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-nmrtg_kube-system(e85c9b85-6cb8-4246-9860-75a733298aad)\"" pod="kube-system/cilium-nmrtg" podUID=e85c9b85-6cb8-4246-9860-75a733298aad
Oct 2 19:33:20.832741 kubelet[1440]: E1002 19:33:20.832689 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:21.833410 kubelet[1440]: E1002 19:33:21.833353 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:22.833967 kubelet[1440]: E1002 19:33:22.833912 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:23.834274 kubelet[1440]: E1002 19:33:23.834206 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:24.835206 kubelet[1440]: E1002 19:33:24.835073 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:25.787718 kubelet[1440]: E1002 19:33:25.787642 1440 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:33:25.835343 kubelet[1440]: E1002 19:33:25.835267 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:26.836116 kubelet[1440]: E1002 19:33:26.836018 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:27.836144 kubelet[1440]: E1002 19:33:27.836108 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:28.837155 kubelet[1440]: E1002 19:33:28.837123 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:29.837828 kubelet[1440]: E1002 19:33:29.837766 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:30.788289 kubelet[1440]: E1002 19:33:30.788228 1440 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:33:30.841324 kubelet[1440]: E1002 19:33:30.841272 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:31.842432 kubelet[1440]: E1002 19:33:31.842401 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:32.801216 kubelet[1440]: E1002 19:33:32.801188 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:33:32.801981 kubelet[1440]: E1002 19:33:32.801963 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:33:32.802277 kubelet[1440]: E1002 19:33:32.802262 1440 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-nmrtg_kube-system(e85c9b85-6cb8-4246-9860-75a733298aad)\"" pod="kube-system/cilium-nmrtg" podUID=e85c9b85-6cb8-4246-9860-75a733298aad
Oct 2 19:33:32.843082 kubelet[1440]: E1002 19:33:32.843049 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:33.844069 kubelet[1440]: E1002 19:33:33.844003 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:34.848119 kubelet[1440]: E1002 19:33:34.844598 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:35.708739 kubelet[1440]: E1002 19:33:35.708702 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:35.788864 kubelet[1440]: E1002 19:33:35.788835 1440 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:33:35.848684 kubelet[1440]: E1002 19:33:35.848645 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:36.849189 kubelet[1440]: E1002 19:33:36.849156 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:37.850650 kubelet[1440]: E1002 19:33:37.850608 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:38.851179 kubelet[1440]: E1002 19:33:38.851141 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:39.851973 kubelet[1440]: E1002 19:33:39.851929 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:40.790091 kubelet[1440]: E1002 19:33:40.790065 1440 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:33:40.852773 kubelet[1440]: E1002 19:33:40.852740 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:41.853993 kubelet[1440]: E1002 19:33:41.853947 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:42.854086 kubelet[1440]: E1002 19:33:42.854050 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:43.854891 kubelet[1440]: E1002 19:33:43.854849 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:44.855312 kubelet[1440]: E1002 19:33:44.855268 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:45.791454 kubelet[1440]: E1002 19:33:45.791425 1440 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:33:45.855936 kubelet[1440]: E1002 19:33:45.855893 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:46.800359 kubelet[1440]: E1002 19:33:46.800330 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:33:46.800748 kubelet[1440]: E1002 19:33:46.800733 1440 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-nmrtg_kube-system(e85c9b85-6cb8-4246-9860-75a733298aad)\"" pod="kube-system/cilium-nmrtg" podUID=e85c9b85-6cb8-4246-9860-75a733298aad
Oct 2 19:33:46.856414 kubelet[1440]: E1002 19:33:46.856380 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:47.857277 kubelet[1440]: E1002 19:33:47.857237 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:48.857901 kubelet[1440]: E1002 19:33:48.857858 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:49.858223 kubelet[1440]: E1002 19:33:49.858136 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:50.792503 kubelet[1440]: E1002 19:33:50.792466 1440 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:33:50.859043 kubelet[1440]: E1002 19:33:50.859003 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:51.859258 kubelet[1440]: E1002 19:33:51.859213 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:52.860106 kubelet[1440]: E1002 19:33:52.860047 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:53.860491 kubelet[1440]: E1002 19:33:53.860453 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:54.860877 kubelet[1440]: E1002 19:33:54.860841 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:55.708941 kubelet[1440]: E1002 19:33:55.708900 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:55.793353 kubelet[1440]: E1002 19:33:55.793325 1440 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:33:55.861044 kubelet[1440]: E1002 19:33:55.861007 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:56.862529 kubelet[1440]: E1002 19:33:56.862467 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:57.800740 kubelet[1440]: E1002 19:33:57.800697 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:33:57.803153 env[1141]: time="2023-10-02T19:33:57.803109121Z" level=info msg="CreateContainer within sandbox \"0d087a01152f85056fd5c026bad5750a5e4677d5904379231ad8cb8efb6d43fe\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:5,}"
Oct 2 19:33:57.810056 env[1141]: time="2023-10-02T19:33:57.810010286Z" level=info msg="CreateContainer within sandbox \"0d087a01152f85056fd5c026bad5750a5e4677d5904379231ad8cb8efb6d43fe\" for &ContainerMetadata{Name:mount-cgroup,Attempt:5,} returns container id \"6dbafdb700bddcb7862abe135fec81aabf0528ecc0b8e8a99a194cd9d8f7b88a\""
Oct 2 19:33:57.810425 env[1141]: time="2023-10-02T19:33:57.810399537Z" level=info msg="StartContainer for \"6dbafdb700bddcb7862abe135fec81aabf0528ecc0b8e8a99a194cd9d8f7b88a\""
Oct 2 19:33:57.826980 systemd[1]: Started cri-containerd-6dbafdb700bddcb7862abe135fec81aabf0528ecc0b8e8a99a194cd9d8f7b88a.scope.
Oct 2 19:33:57.844011 systemd[1]: cri-containerd-6dbafdb700bddcb7862abe135fec81aabf0528ecc0b8e8a99a194cd9d8f7b88a.scope: Deactivated successfully.
Oct 2 19:33:57.847356 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6dbafdb700bddcb7862abe135fec81aabf0528ecc0b8e8a99a194cd9d8f7b88a-rootfs.mount: Deactivated successfully.
Oct 2 19:33:57.851718 env[1141]: time="2023-10-02T19:33:57.851662398Z" level=info msg="shim disconnected" id=6dbafdb700bddcb7862abe135fec81aabf0528ecc0b8e8a99a194cd9d8f7b88a Oct 2 19:33:57.851718 env[1141]: time="2023-10-02T19:33:57.851712520Z" level=warning msg="cleaning up after shim disconnected" id=6dbafdb700bddcb7862abe135fec81aabf0528ecc0b8e8a99a194cd9d8f7b88a namespace=k8s.io Oct 2 19:33:57.851903 env[1141]: time="2023-10-02T19:33:57.851722960Z" level=info msg="cleaning up dead shim" Oct 2 19:33:57.859597 env[1141]: time="2023-10-02T19:33:57.859544671Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:33:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1975 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:33:57Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/6dbafdb700bddcb7862abe135fec81aabf0528ecc0b8e8a99a194cd9d8f7b88a/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:33:57.859855 env[1141]: time="2023-10-02T19:33:57.859796279Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:33:57.860202 env[1141]: time="2023-10-02T19:33:57.860168130Z" level=error msg="Failed to pipe stderr of container \"6dbafdb700bddcb7862abe135fec81aabf0528ecc0b8e8a99a194cd9d8f7b88a\"" error="reading from a closed fifo" Oct 2 19:33:57.860724 env[1141]: time="2023-10-02T19:33:57.860682305Z" level=error msg="Failed to pipe stdout of container \"6dbafdb700bddcb7862abe135fec81aabf0528ecc0b8e8a99a194cd9d8f7b88a\"" error="reading from a closed fifo" Oct 2 19:33:57.861841 env[1141]: time="2023-10-02T19:33:57.861689975Z" level=error msg="StartContainer for \"6dbafdb700bddcb7862abe135fec81aabf0528ecc0b8e8a99a194cd9d8f7b88a\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:33:57.862045 kubelet[1440]: E1002 19:33:57.862018 1440 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="6dbafdb700bddcb7862abe135fec81aabf0528ecc0b8e8a99a194cd9d8f7b88a" Oct 2 19:33:57.862303 kubelet[1440]: E1002 19:33:57.862286 1440 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:33:57.862303 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:33:57.862303 kubelet[1440]: rm /hostbin/cilium-mount Oct 2 19:33:57.862303 kubelet[1440]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-zrprl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-nmrtg_kube-system(e85c9b85-6cb8-4246-9860-75a733298aad): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:33:57.862447 kubelet[1440]: E1002 19:33:57.862331 1440 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-nmrtg" podUID=e85c9b85-6cb8-4246-9860-75a733298aad Oct 2 19:33:57.863082 kubelet[1440]: E1002 19:33:57.863063 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:58.117308 kubelet[1440]: I1002 19:33:58.117219 1440 scope.go:115] "RemoveContainer" containerID="14d29a95366d7e708b2c6357d4f210a990d08e58a09d9871e8b8bb576f0362c7" Oct 2 19:33:58.117704 kubelet[1440]: I1002 19:33:58.117648 1440 scope.go:115] "RemoveContainer" containerID="14d29a95366d7e708b2c6357d4f210a990d08e58a09d9871e8b8bb576f0362c7" Oct 2 19:33:58.119455 env[1141]: time="2023-10-02T19:33:58.119245250Z" level=info msg="RemoveContainer for 
\"14d29a95366d7e708b2c6357d4f210a990d08e58a09d9871e8b8bb576f0362c7\"" Oct 2 19:33:58.119628 env[1141]: time="2023-10-02T19:33:58.119591060Z" level=info msg="RemoveContainer for \"14d29a95366d7e708b2c6357d4f210a990d08e58a09d9871e8b8bb576f0362c7\"" Oct 2 19:33:58.119787 env[1141]: time="2023-10-02T19:33:58.119754665Z" level=error msg="RemoveContainer for \"14d29a95366d7e708b2c6357d4f210a990d08e58a09d9871e8b8bb576f0362c7\" failed" error="failed to set removing state for container \"14d29a95366d7e708b2c6357d4f210a990d08e58a09d9871e8b8bb576f0362c7\": container is already in removing state" Oct 2 19:33:58.120014 kubelet[1440]: E1002 19:33:58.119987 1440 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"14d29a95366d7e708b2c6357d4f210a990d08e58a09d9871e8b8bb576f0362c7\": container is already in removing state" containerID="14d29a95366d7e708b2c6357d4f210a990d08e58a09d9871e8b8bb576f0362c7" Oct 2 19:33:58.120088 kubelet[1440]: E1002 19:33:58.120018 1440 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "14d29a95366d7e708b2c6357d4f210a990d08e58a09d9871e8b8bb576f0362c7": container is already in removing state; Skipping pod "cilium-nmrtg_kube-system(e85c9b85-6cb8-4246-9860-75a733298aad)" Oct 2 19:33:58.120088 kubelet[1440]: E1002 19:33:58.120071 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:33:58.120303 kubelet[1440]: E1002 19:33:58.120291 1440 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-nmrtg_kube-system(e85c9b85-6cb8-4246-9860-75a733298aad)\"" pod="kube-system/cilium-nmrtg" 
podUID=e85c9b85-6cb8-4246-9860-75a733298aad Oct 2 19:33:58.122083 env[1141]: time="2023-10-02T19:33:58.122056373Z" level=info msg="RemoveContainer for \"14d29a95366d7e708b2c6357d4f210a990d08e58a09d9871e8b8bb576f0362c7\" returns successfully" Oct 2 19:33:58.864245 kubelet[1440]: E1002 19:33:58.864189 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:59.864527 kubelet[1440]: E1002 19:33:59.864465 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:00.794588 kubelet[1440]: E1002 19:34:00.794466 1440 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:00.864636 kubelet[1440]: E1002 19:34:00.864570 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:00.955858 kubelet[1440]: W1002 19:34:00.955820 1440 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode85c9b85_6cb8_4246_9860_75a733298aad.slice/cri-containerd-6dbafdb700bddcb7862abe135fec81aabf0528ecc0b8e8a99a194cd9d8f7b88a.scope WatchSource:0}: task 6dbafdb700bddcb7862abe135fec81aabf0528ecc0b8e8a99a194cd9d8f7b88a not found: not found Oct 2 19:34:01.865732 kubelet[1440]: E1002 19:34:01.865696 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:02.867056 kubelet[1440]: E1002 19:34:02.867019 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:03.867999 kubelet[1440]: E1002 19:34:03.867964 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
Oct 2 19:34:03.931282 env[1141]: time="2023-10-02T19:34:03.931239141Z" level=info msg="StopPodSandbox for \"0d087a01152f85056fd5c026bad5750a5e4677d5904379231ad8cb8efb6d43fe\"" Oct 2 19:34:03.931627 env[1141]: time="2023-10-02T19:34:03.931303503Z" level=info msg="Container to stop \"6dbafdb700bddcb7862abe135fec81aabf0528ecc0b8e8a99a194cd9d8f7b88a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:34:03.932764 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d087a01152f85056fd5c026bad5750a5e4677d5904379231ad8cb8efb6d43fe-shm.mount: Deactivated successfully. Oct 2 19:34:03.942000 audit: BPF prog-id=68 op=UNLOAD Oct 2 19:34:03.942195 systemd[1]: cri-containerd-0d087a01152f85056fd5c026bad5750a5e4677d5904379231ad8cb8efb6d43fe.scope: Deactivated successfully. Oct 2 19:34:03.944498 kernel: kauditd_printk_skb: 307 callbacks suppressed Oct 2 19:34:03.944562 kernel: audit: type=1334 audit(1696275243.942:643): prog-id=68 op=UNLOAD Oct 2 19:34:03.947000 audit: BPF prog-id=71 op=UNLOAD Oct 2 19:34:03.949504 kernel: audit: type=1334 audit(1696275243.947:644): prog-id=71 op=UNLOAD Oct 2 19:34:03.971008 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d087a01152f85056fd5c026bad5750a5e4677d5904379231ad8cb8efb6d43fe-rootfs.mount: Deactivated successfully. 
Oct 2 19:34:03.979395 env[1141]: time="2023-10-02T19:34:03.979338796Z" level=info msg="shim disconnected" id=0d087a01152f85056fd5c026bad5750a5e4677d5904379231ad8cb8efb6d43fe Oct 2 19:34:03.979395 env[1141]: time="2023-10-02T19:34:03.979388318Z" level=warning msg="cleaning up after shim disconnected" id=0d087a01152f85056fd5c026bad5750a5e4677d5904379231ad8cb8efb6d43fe namespace=k8s.io Oct 2 19:34:03.979395 env[1141]: time="2023-10-02T19:34:03.979399038Z" level=info msg="cleaning up dead shim" Oct 2 19:34:03.989064 env[1141]: time="2023-10-02T19:34:03.989014729Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:34:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2006 runtime=io.containerd.runc.v2\n" Oct 2 19:34:03.989391 env[1141]: time="2023-10-02T19:34:03.989351499Z" level=info msg="TearDown network for sandbox \"0d087a01152f85056fd5c026bad5750a5e4677d5904379231ad8cb8efb6d43fe\" successfully" Oct 2 19:34:03.989391 env[1141]: time="2023-10-02T19:34:03.989381740Z" level=info msg="StopPodSandbox for \"0d087a01152f85056fd5c026bad5750a5e4677d5904379231ad8cb8efb6d43fe\" returns successfully" Oct 2 19:34:04.081650 kubelet[1440]: I1002 19:34:04.081606 1440 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-lib-modules\") pod \"e85c9b85-6cb8-4246-9860-75a733298aad\" (UID: \"e85c9b85-6cb8-4246-9860-75a733298aad\") " Oct 2 19:34:04.081823 kubelet[1440]: I1002 19:34:04.081662 1440 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e85c9b85-6cb8-4246-9860-75a733298aad-cilium-config-path\") pod \"e85c9b85-6cb8-4246-9860-75a733298aad\" (UID: \"e85c9b85-6cb8-4246-9860-75a733298aad\") " Oct 2 19:34:04.081823 kubelet[1440]: I1002 19:34:04.081684 1440 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-cilium-run\") pod \"e85c9b85-6cb8-4246-9860-75a733298aad\" (UID: \"e85c9b85-6cb8-4246-9860-75a733298aad\") " Oct 2 19:34:04.081823 kubelet[1440]: I1002 19:34:04.081701 1440 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-host-proc-sys-net\") pod \"e85c9b85-6cb8-4246-9860-75a733298aad\" (UID: \"e85c9b85-6cb8-4246-9860-75a733298aad\") " Oct 2 19:34:04.081823 kubelet[1440]: I1002 19:34:04.081721 1440 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e85c9b85-6cb8-4246-9860-75a733298aad-hubble-tls\") pod \"e85c9b85-6cb8-4246-9860-75a733298aad\" (UID: \"e85c9b85-6cb8-4246-9860-75a733298aad\") " Oct 2 19:34:04.081823 kubelet[1440]: I1002 19:34:04.081742 1440 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e85c9b85-6cb8-4246-9860-75a733298aad-clustermesh-secrets\") pod \"e85c9b85-6cb8-4246-9860-75a733298aad\" (UID: \"e85c9b85-6cb8-4246-9860-75a733298aad\") " Oct 2 19:34:04.081823 kubelet[1440]: I1002 19:34:04.081760 1440 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-bpf-maps\") pod \"e85c9b85-6cb8-4246-9860-75a733298aad\" (UID: \"e85c9b85-6cb8-4246-9860-75a733298aad\") " Oct 2 19:34:04.081823 kubelet[1440]: I1002 19:34:04.081778 1440 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-cni-path\") pod \"e85c9b85-6cb8-4246-9860-75a733298aad\" (UID: \"e85c9b85-6cb8-4246-9860-75a733298aad\") " Oct 2 19:34:04.081823 kubelet[1440]: I1002 19:34:04.081793 1440 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-hostproc\") pod \"e85c9b85-6cb8-4246-9860-75a733298aad\" (UID: \"e85c9b85-6cb8-4246-9860-75a733298aad\") " Oct 2 19:34:04.081823 kubelet[1440]: I1002 19:34:04.081811 1440 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-host-proc-sys-kernel\") pod \"e85c9b85-6cb8-4246-9860-75a733298aad\" (UID: \"e85c9b85-6cb8-4246-9860-75a733298aad\") " Oct 2 19:34:04.082029 kubelet[1440]: I1002 19:34:04.081831 1440 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrprl\" (UniqueName: \"kubernetes.io/projected/e85c9b85-6cb8-4246-9860-75a733298aad-kube-api-access-zrprl\") pod \"e85c9b85-6cb8-4246-9860-75a733298aad\" (UID: \"e85c9b85-6cb8-4246-9860-75a733298aad\") " Oct 2 19:34:04.082029 kubelet[1440]: I1002 19:34:04.081848 1440 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-cilium-cgroup\") pod \"e85c9b85-6cb8-4246-9860-75a733298aad\" (UID: \"e85c9b85-6cb8-4246-9860-75a733298aad\") " Oct 2 19:34:04.082029 kubelet[1440]: I1002 19:34:04.081866 1440 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-xtables-lock\") pod \"e85c9b85-6cb8-4246-9860-75a733298aad\" (UID: \"e85c9b85-6cb8-4246-9860-75a733298aad\") " Oct 2 19:34:04.082029 kubelet[1440]: I1002 19:34:04.081883 1440 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-etc-cni-netd\") pod \"e85c9b85-6cb8-4246-9860-75a733298aad\" (UID: 
\"e85c9b85-6cb8-4246-9860-75a733298aad\") " Oct 2 19:34:04.082029 kubelet[1440]: I1002 19:34:04.081939 1440 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e85c9b85-6cb8-4246-9860-75a733298aad" (UID: "e85c9b85-6cb8-4246-9860-75a733298aad"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:34:04.082029 kubelet[1440]: I1002 19:34:04.081976 1440 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e85c9b85-6cb8-4246-9860-75a733298aad" (UID: "e85c9b85-6cb8-4246-9860-75a733298aad"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:34:04.082204 kubelet[1440]: W1002 19:34:04.082180 1440 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/e85c9b85-6cb8-4246-9860-75a733298aad/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:34:04.082548 kubelet[1440]: I1002 19:34:04.082216 1440 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-cni-path" (OuterVolumeSpecName: "cni-path") pod "e85c9b85-6cb8-4246-9860-75a733298aad" (UID: "e85c9b85-6cb8-4246-9860-75a733298aad"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:34:04.082548 kubelet[1440]: I1002 19:34:04.082258 1440 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e85c9b85-6cb8-4246-9860-75a733298aad" (UID: "e85c9b85-6cb8-4246-9860-75a733298aad"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:34:04.082548 kubelet[1440]: I1002 19:34:04.082470 1440 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e85c9b85-6cb8-4246-9860-75a733298aad" (UID: "e85c9b85-6cb8-4246-9860-75a733298aad"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:34:04.082548 kubelet[1440]: I1002 19:34:04.082478 1440 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-hostproc" (OuterVolumeSpecName: "hostproc") pod "e85c9b85-6cb8-4246-9860-75a733298aad" (UID: "e85c9b85-6cb8-4246-9860-75a733298aad"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:34:04.082548 kubelet[1440]: I1002 19:34:04.082495 1440 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e85c9b85-6cb8-4246-9860-75a733298aad" (UID: "e85c9b85-6cb8-4246-9860-75a733298aad"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:34:04.082548 kubelet[1440]: I1002 19:34:04.082510 1440 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e85c9b85-6cb8-4246-9860-75a733298aad" (UID: "e85c9b85-6cb8-4246-9860-75a733298aad"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:34:04.082548 kubelet[1440]: I1002 19:34:04.082519 1440 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e85c9b85-6cb8-4246-9860-75a733298aad" (UID: "e85c9b85-6cb8-4246-9860-75a733298aad"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:34:04.082548 kubelet[1440]: I1002 19:34:04.082528 1440 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e85c9b85-6cb8-4246-9860-75a733298aad" (UID: "e85c9b85-6cb8-4246-9860-75a733298aad"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:34:04.084048 kubelet[1440]: I1002 19:34:04.083993 1440 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e85c9b85-6cb8-4246-9860-75a733298aad-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e85c9b85-6cb8-4246-9860-75a733298aad" (UID: "e85c9b85-6cb8-4246-9860-75a733298aad"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:34:04.085985 systemd[1]: var-lib-kubelet-pods-e85c9b85\x2d6cb8\x2d4246\x2d9860\x2d75a733298aad-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzrprl.mount: Deactivated successfully. Oct 2 19:34:04.086086 systemd[1]: var-lib-kubelet-pods-e85c9b85\x2d6cb8\x2d4246\x2d9860\x2d75a733298aad-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Oct 2 19:34:04.086855 kubelet[1440]: I1002 19:34:04.086712 1440 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e85c9b85-6cb8-4246-9860-75a733298aad-kube-api-access-zrprl" (OuterVolumeSpecName: "kube-api-access-zrprl") pod "e85c9b85-6cb8-4246-9860-75a733298aad" (UID: "e85c9b85-6cb8-4246-9860-75a733298aad"). InnerVolumeSpecName "kube-api-access-zrprl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:34:04.087403 kubelet[1440]: I1002 19:34:04.087379 1440 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e85c9b85-6cb8-4246-9860-75a733298aad-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e85c9b85-6cb8-4246-9860-75a733298aad" (UID: "e85c9b85-6cb8-4246-9860-75a733298aad"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:34:04.087735 kubelet[1440]: I1002 19:34:04.087697 1440 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e85c9b85-6cb8-4246-9860-75a733298aad-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e85c9b85-6cb8-4246-9860-75a733298aad" (UID: "e85c9b85-6cb8-4246-9860-75a733298aad"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:34:04.087716 systemd[1]: var-lib-kubelet-pods-e85c9b85\x2d6cb8\x2d4246\x2d9860\x2d75a733298aad-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:34:04.130889 kubelet[1440]: I1002 19:34:04.130778 1440 scope.go:115] "RemoveContainer" containerID="6dbafdb700bddcb7862abe135fec81aabf0528ecc0b8e8a99a194cd9d8f7b88a" Oct 2 19:34:04.133947 env[1141]: time="2023-10-02T19:34:04.133872922Z" level=info msg="RemoveContainer for \"6dbafdb700bddcb7862abe135fec81aabf0528ecc0b8e8a99a194cd9d8f7b88a\"" Oct 2 19:34:04.134842 systemd[1]: Removed slice kubepods-burstable-pode85c9b85_6cb8_4246_9860_75a733298aad.slice. 
Oct 2 19:34:04.139517 env[1141]: time="2023-10-02T19:34:04.139481613Z" level=info msg="RemoveContainer for \"6dbafdb700bddcb7862abe135fec81aabf0528ecc0b8e8a99a194cd9d8f7b88a\" returns successfully" Oct 2 19:34:04.182890 kubelet[1440]: I1002 19:34:04.182852 1440 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e85c9b85-6cb8-4246-9860-75a733298aad-clustermesh-secrets\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:34:04.183239 kubelet[1440]: I1002 19:34:04.183180 1440 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-bpf-maps\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:34:04.183354 kubelet[1440]: I1002 19:34:04.183343 1440 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-cni-path\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:34:04.183421 kubelet[1440]: I1002 19:34:04.183413 1440 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-cilium-run\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:34:04.183492 kubelet[1440]: I1002 19:34:04.183484 1440 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-host-proc-sys-net\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:34:04.183558 kubelet[1440]: I1002 19:34:04.183550 1440 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e85c9b85-6cb8-4246-9860-75a733298aad-hubble-tls\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:34:04.183629 kubelet[1440]: I1002 19:34:04.183613 1440 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-host-proc-sys-kernel\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:34:04.183697 kubelet[1440]: I1002 19:34:04.183689 1440 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zrprl\" (UniqueName: \"kubernetes.io/projected/e85c9b85-6cb8-4246-9860-75a733298aad-kube-api-access-zrprl\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:34:04.183754 kubelet[1440]: I1002 19:34:04.183747 1440 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-cilium-cgroup\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:34:04.183817 kubelet[1440]: I1002 19:34:04.183802 1440 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-hostproc\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:34:04.183872 kubelet[1440]: I1002 19:34:04.183865 1440 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-etc-cni-netd\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:34:04.183934 kubelet[1440]: I1002 19:34:04.183920 1440 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-xtables-lock\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:34:04.183994 kubelet[1440]: I1002 19:34:04.183987 1440 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e85c9b85-6cb8-4246-9860-75a733298aad-lib-modules\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:34:04.184054 kubelet[1440]: I1002 19:34:04.184042 1440 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e85c9b85-6cb8-4246-9860-75a733298aad-cilium-config-path\") on node \"10.0.0.13\" 
DevicePath \"\"" Oct 2 19:34:04.869486 kubelet[1440]: E1002 19:34:04.869451 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:05.795559 kubelet[1440]: E1002 19:34:05.795532 1440 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:05.801958 kubelet[1440]: I1002 19:34:05.801937 1440 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=e85c9b85-6cb8-4246-9860-75a733298aad path="/var/lib/kubelet/pods/e85c9b85-6cb8-4246-9860-75a733298aad/volumes" Oct 2 19:34:05.870715 kubelet[1440]: E1002 19:34:05.870684 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:06.687198 kubelet[1440]: I1002 19:34:06.687154 1440 topology_manager.go:212] "Topology Admit Handler" Oct 2 19:34:06.687198 kubelet[1440]: E1002 19:34:06.687205 1440 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e85c9b85-6cb8-4246-9860-75a733298aad" containerName="mount-cgroup" Oct 2 19:34:06.687416 kubelet[1440]: E1002 19:34:06.687214 1440 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e85c9b85-6cb8-4246-9860-75a733298aad" containerName="mount-cgroup" Oct 2 19:34:06.687416 kubelet[1440]: E1002 19:34:06.687223 1440 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e85c9b85-6cb8-4246-9860-75a733298aad" containerName="mount-cgroup" Oct 2 19:34:06.687416 kubelet[1440]: E1002 19:34:06.687229 1440 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e85c9b85-6cb8-4246-9860-75a733298aad" containerName="mount-cgroup" Oct 2 19:34:06.687416 kubelet[1440]: E1002 19:34:06.687235 1440 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e85c9b85-6cb8-4246-9860-75a733298aad" containerName="mount-cgroup" Oct 2 19:34:06.687416 
kubelet[1440]: I1002 19:34:06.687250 1440 memory_manager.go:346] "RemoveStaleState removing state" podUID="e85c9b85-6cb8-4246-9860-75a733298aad" containerName="mount-cgroup" Oct 2 19:34:06.687416 kubelet[1440]: I1002 19:34:06.687258 1440 memory_manager.go:346] "RemoveStaleState removing state" podUID="e85c9b85-6cb8-4246-9860-75a733298aad" containerName="mount-cgroup" Oct 2 19:34:06.687416 kubelet[1440]: I1002 19:34:06.687264 1440 memory_manager.go:346] "RemoveStaleState removing state" podUID="e85c9b85-6cb8-4246-9860-75a733298aad" containerName="mount-cgroup" Oct 2 19:34:06.687416 kubelet[1440]: I1002 19:34:06.687270 1440 memory_manager.go:346] "RemoveStaleState removing state" podUID="e85c9b85-6cb8-4246-9860-75a733298aad" containerName="mount-cgroup" Oct 2 19:34:06.691649 systemd[1]: Created slice kubepods-besteffort-pod7ecd9162_e143_4224_93bb_13c35b233f11.slice. Oct 2 19:34:06.705673 kubelet[1440]: I1002 19:34:06.705633 1440 topology_manager.go:212] "Topology Admit Handler" Oct 2 19:34:06.705804 kubelet[1440]: E1002 19:34:06.705697 1440 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e85c9b85-6cb8-4246-9860-75a733298aad" containerName="mount-cgroup" Oct 2 19:34:06.705804 kubelet[1440]: I1002 19:34:06.705719 1440 memory_manager.go:346] "RemoveStaleState removing state" podUID="e85c9b85-6cb8-4246-9860-75a733298aad" containerName="mount-cgroup" Oct 2 19:34:06.705804 kubelet[1440]: I1002 19:34:06.705727 1440 memory_manager.go:346] "RemoveStaleState removing state" podUID="e85c9b85-6cb8-4246-9860-75a733298aad" containerName="mount-cgroup" Oct 2 19:34:06.710247 systemd[1]: Created slice kubepods-burstable-pod9ec7cd66_c9a5_49cf_8739_3f0fd159173b.slice. 
Oct 2 19:34:06.712159 kubelet[1440]: W1002 19:34:06.712127 1440 reflector.go:533] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:10.0.0.13" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.13' and this object Oct 2 19:34:06.712159 kubelet[1440]: E1002 19:34:06.712165 1440 reflector.go:148] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:10.0.0.13" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.13' and this object Oct 2 19:34:06.712285 kubelet[1440]: W1002 19:34:06.712135 1440 reflector.go:533] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.0.0.13" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.13' and this object Oct 2 19:34:06.712285 kubelet[1440]: E1002 19:34:06.712184 1440 reflector.go:148] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.0.0.13" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.13' and this object Oct 2 19:34:06.712452 kubelet[1440]: W1002 19:34:06.712423 1440 reflector.go:533] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:10.0.0.13" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.13' and this object Oct 2 19:34:06.712533 kubelet[1440]: E1002 19:34:06.712524 1440 reflector.go:148] 
object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:10.0.0.13" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.13' and this object Oct 2 19:34:06.798620 kubelet[1440]: I1002 19:34:06.798582 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-hostproc\") pod \"cilium-vrqdw\" (UID: \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\") " pod="kube-system/cilium-vrqdw" Oct 2 19:34:06.798840 kubelet[1440]: I1002 19:34:06.798828 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-cilium-cgroup\") pod \"cilium-vrqdw\" (UID: \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\") " pod="kube-system/cilium-vrqdw" Oct 2 19:34:06.798935 kubelet[1440]: I1002 19:34:06.798926 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-etc-cni-netd\") pod \"cilium-vrqdw\" (UID: \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\") " pod="kube-system/cilium-vrqdw" Oct 2 19:34:06.799029 kubelet[1440]: I1002 19:34:06.799020 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-xtables-lock\") pod \"cilium-vrqdw\" (UID: \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\") " pod="kube-system/cilium-vrqdw" Oct 2 19:34:06.799142 kubelet[1440]: I1002 19:34:06.799131 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-cilium-run\") pod \"cilium-vrqdw\" (UID: \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\") " pod="kube-system/cilium-vrqdw" Oct 2 19:34:06.799267 kubelet[1440]: I1002 19:34:06.799245 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-cni-path\") pod \"cilium-vrqdw\" (UID: \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\") " pod="kube-system/cilium-vrqdw" Oct 2 19:34:06.799312 kubelet[1440]: I1002 19:34:06.799290 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-cilium-config-path\") pod \"cilium-vrqdw\" (UID: \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\") " pod="kube-system/cilium-vrqdw" Oct 2 19:34:06.799312 kubelet[1440]: I1002 19:34:06.799312 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-host-proc-sys-net\") pod \"cilium-vrqdw\" (UID: \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\") " pod="kube-system/cilium-vrqdw" Oct 2 19:34:06.799367 kubelet[1440]: I1002 19:34:06.799343 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-hubble-tls\") pod \"cilium-vrqdw\" (UID: \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\") " pod="kube-system/cilium-vrqdw" Oct 2 19:34:06.799367 kubelet[1440]: I1002 19:34:06.799362 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlpwl\" (UniqueName: \"kubernetes.io/projected/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-kube-api-access-wlpwl\") pod \"cilium-vrqdw\" (UID: 
\"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\") " pod="kube-system/cilium-vrqdw" Oct 2 19:34:06.799417 kubelet[1440]: I1002 19:34:06.799384 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7ecd9162-e143-4224-93bb-13c35b233f11-cilium-config-path\") pod \"cilium-operator-574c4bb98d-lk7j8\" (UID: \"7ecd9162-e143-4224-93bb-13c35b233f11\") " pod="kube-system/cilium-operator-574c4bb98d-lk7j8" Oct 2 19:34:06.799417 kubelet[1440]: I1002 19:34:06.799411 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf2rs\" (UniqueName: \"kubernetes.io/projected/7ecd9162-e143-4224-93bb-13c35b233f11-kube-api-access-sf2rs\") pod \"cilium-operator-574c4bb98d-lk7j8\" (UID: \"7ecd9162-e143-4224-93bb-13c35b233f11\") " pod="kube-system/cilium-operator-574c4bb98d-lk7j8" Oct 2 19:34:06.799467 kubelet[1440]: I1002 19:34:06.799431 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-bpf-maps\") pod \"cilium-vrqdw\" (UID: \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\") " pod="kube-system/cilium-vrqdw" Oct 2 19:34:06.799467 kubelet[1440]: I1002 19:34:06.799449 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-lib-modules\") pod \"cilium-vrqdw\" (UID: \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\") " pod="kube-system/cilium-vrqdw" Oct 2 19:34:06.799512 kubelet[1440]: I1002 19:34:06.799468 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-clustermesh-secrets\") pod \"cilium-vrqdw\" (UID: 
\"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\") " pod="kube-system/cilium-vrqdw" Oct 2 19:34:06.799512 kubelet[1440]: I1002 19:34:06.799487 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-cilium-ipsec-secrets\") pod \"cilium-vrqdw\" (UID: \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\") " pod="kube-system/cilium-vrqdw" Oct 2 19:34:06.799512 kubelet[1440]: I1002 19:34:06.799506 1440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-host-proc-sys-kernel\") pod \"cilium-vrqdw\" (UID: \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\") " pod="kube-system/cilium-vrqdw" Oct 2 19:34:06.872062 kubelet[1440]: E1002 19:34:06.872029 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:06.994280 kubelet[1440]: E1002 19:34:06.994258 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:06.995123 env[1141]: time="2023-10-02T19:34:06.995071045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-lk7j8,Uid:7ecd9162-e143-4224-93bb-13c35b233f11,Namespace:kube-system,Attempt:0,}" Oct 2 19:34:07.008606 env[1141]: time="2023-10-02T19:34:07.008420773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:34:07.008606 env[1141]: time="2023-10-02T19:34:07.008462655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:34:07.008606 env[1141]: time="2023-10-02T19:34:07.008473015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:34:07.008799 env[1141]: time="2023-10-02T19:34:07.008632140Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/79f3631ed214d97e21f4351fe1cfa793b0ee5aff2b0578f3551b560a66887279 pid=2031 runtime=io.containerd.runc.v2 Oct 2 19:34:07.020866 systemd[1]: Started cri-containerd-79f3631ed214d97e21f4351fe1cfa793b0ee5aff2b0578f3551b560a66887279.scope. Oct 2 19:34:07.090000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.090000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.095477 kernel: audit: type=1400 audit(1696275247.090:645): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.095568 kernel: audit: type=1400 audit(1696275247.090:646): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.095598 kernel: audit: type=1400 audit(1696275247.090:647): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.090000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:34:07.090000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.099208 kernel: audit: type=1400 audit(1696275247.090:648): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.099267 kernel: audit: type=1400 audit(1696275247.090:649): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.090000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.100947 kernel: audit: type=1400 audit(1696275247.090:650): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.090000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.102583 kernel: audit: type=1400 audit(1696275247.090:651): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.090000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.104301 kernel: audit: type=1400 audit(1696275247.090:652): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.090000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.090000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.092000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.092000 audit: BPF prog-id=75 op=LOAD Oct 2 19:34:07.092000 audit[2041]: AVC avc: denied { bpf } for pid=2041 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.092000 audit[2041]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=40001c5b38 a2=10 a3=0 items=0 ppid=2031 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:07.092000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739663336333165643231346439376532316634333531666531636661 Oct 2 19:34:07.092000 audit[2041]: AVC avc: denied { perfmon } for pid=2041 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.092000 audit[2041]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001c55a0 a2=3c a3=0 items=0 ppid=2031 pid=2041 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:07.092000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739663336333165643231346439376532316634333531666531636661 Oct 2 19:34:07.093000 audit[2041]: AVC avc: denied { bpf } for pid=2041 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.093000 audit[2041]: AVC avc: denied { bpf } for pid=2041 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.093000 audit[2041]: AVC avc: denied { bpf } for pid=2041 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.093000 audit[2041]: AVC avc: denied { perfmon } for pid=2041 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.093000 audit[2041]: AVC avc: denied { perfmon } for pid=2041 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.093000 audit[2041]: AVC avc: denied { perfmon } for pid=2041 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.093000 audit[2041]: AVC avc: denied { perfmon } for pid=2041 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.093000 
audit[2041]: AVC avc: denied { perfmon } for pid=2041 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.093000 audit[2041]: AVC avc: denied { bpf } for pid=2041 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.093000 audit[2041]: AVC avc: denied { bpf } for pid=2041 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.093000 audit: BPF prog-id=76 op=LOAD Oct 2 19:34:07.093000 audit[2041]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001c58e0 a2=78 a3=0 items=0 ppid=2031 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:07.093000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739663336333165643231346439376532316634333531666531636661 Oct 2 19:34:07.094000 audit[2041]: AVC avc: denied { bpf } for pid=2041 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.094000 audit[2041]: AVC avc: denied { bpf } for pid=2041 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.094000 audit[2041]: AVC avc: denied { perfmon } for pid=2041 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.094000 audit[2041]: AVC avc: denied { perfmon } 
for pid=2041 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.094000 audit[2041]: AVC avc: denied { perfmon } for pid=2041 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.094000 audit[2041]: AVC avc: denied { perfmon } for pid=2041 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.094000 audit[2041]: AVC avc: denied { perfmon } for pid=2041 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.094000 audit[2041]: AVC avc: denied { bpf } for pid=2041 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.094000 audit[2041]: AVC avc: denied { bpf } for pid=2041 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.094000 audit: BPF prog-id=77 op=LOAD Oct 2 19:34:07.094000 audit[2041]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001c5670 a2=78 a3=0 items=0 ppid=2031 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:07.094000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739663336333165643231346439376532316634333531666531636661 Oct 2 19:34:07.096000 audit: BPF prog-id=77 op=UNLOAD Oct 2 19:34:07.096000 audit: BPF prog-id=76 
op=UNLOAD Oct 2 19:34:07.096000 audit[2041]: AVC avc: denied { bpf } for pid=2041 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.096000 audit[2041]: AVC avc: denied { bpf } for pid=2041 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.096000 audit[2041]: AVC avc: denied { bpf } for pid=2041 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.096000 audit[2041]: AVC avc: denied { perfmon } for pid=2041 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.096000 audit[2041]: AVC avc: denied { perfmon } for pid=2041 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.096000 audit[2041]: AVC avc: denied { perfmon } for pid=2041 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.096000 audit[2041]: AVC avc: denied { perfmon } for pid=2041 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.096000 audit[2041]: AVC avc: denied { perfmon } for pid=2041 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.096000 audit[2041]: AVC avc: denied { bpf } for pid=2041 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.096000 audit[2041]: AVC avc: denied { bpf } for pid=2041 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:07.096000 audit: BPF prog-id=78 op=LOAD Oct 2 19:34:07.096000 audit[2041]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001c5b40 a2=78 a3=0 items=0 ppid=2031 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:07.096000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739663336333165643231346439376532316634333531666531636661 Oct 2 19:34:07.130286 env[1141]: time="2023-10-02T19:34:07.130231342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-lk7j8,Uid:7ecd9162-e143-4224-93bb-13c35b233f11,Namespace:kube-system,Attempt:0,} returns sandbox id \"79f3631ed214d97e21f4351fe1cfa793b0ee5aff2b0578f3551b560a66887279\"" Oct 2 19:34:07.131124 kubelet[1440]: E1002 19:34:07.130926 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:07.131745 env[1141]: time="2023-10-02T19:34:07.131713588Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 2 19:34:07.872662 kubelet[1440]: E1002 19:34:07.872612 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:07.902272 kubelet[1440]: E1002 19:34:07.901817 1440 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Oct 2 19:34:07.902272 kubelet[1440]: 
E1002 19:34:07.901911 1440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-clustermesh-secrets podName:9ec7cd66-c9a5-49cf-8739-3f0fd159173b nodeName:}" failed. No retries permitted until 2023-10-02 19:34:08.401889684 +0000 UTC m=+193.705344015 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-clustermesh-secrets") pod "cilium-vrqdw" (UID: "9ec7cd66-c9a5-49cf-8739-3f0fd159173b") : failed to sync secret cache: timed out waiting for the condition Oct 2 19:34:07.902272 kubelet[1440]: E1002 19:34:07.901818 1440 projected.go:267] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Oct 2 19:34:07.902272 kubelet[1440]: E1002 19:34:07.902089 1440 projected.go:198] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-vrqdw: failed to sync secret cache: timed out waiting for the condition Oct 2 19:34:07.902272 kubelet[1440]: E1002 19:34:07.902184 1440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-hubble-tls podName:9ec7cd66-c9a5-49cf-8739-3f0fd159173b nodeName:}" failed. No retries permitted until 2023-10-02 19:34:08.402174453 +0000 UTC m=+193.705628784 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-hubble-tls") pod "cilium-vrqdw" (UID: "9ec7cd66-c9a5-49cf-8739-3f0fd159173b") : failed to sync secret cache: timed out waiting for the condition Oct 2 19:34:07.902272 kubelet[1440]: E1002 19:34:07.901834 1440 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Oct 2 19:34:07.902272 kubelet[1440]: E1002 19:34:07.902221 1440 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-cilium-ipsec-secrets podName:9ec7cd66-c9a5-49cf-8739-3f0fd159173b nodeName:}" failed. No retries permitted until 2023-10-02 19:34:08.402213894 +0000 UTC m=+193.705668225 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-cilium-ipsec-secrets") pod "cilium-vrqdw" (UID: "9ec7cd66-c9a5-49cf-8739-3f0fd159173b") : failed to sync secret cache: timed out waiting for the condition Oct 2 19:34:07.930619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2782651270.mount: Deactivated successfully. 
Oct 2 19:34:08.456394 env[1141]: time="2023-10-02T19:34:08.456347456Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:34:08.457535 env[1141]: time="2023-10-02T19:34:08.457491211Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:34:08.459393 env[1141]: time="2023-10-02T19:34:08.459365348Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:34:08.459891 env[1141]: time="2023-10-02T19:34:08.459860523Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Oct 2 19:34:08.461970 env[1141]: time="2023-10-02T19:34:08.461933107Z" level=info msg="CreateContainer within sandbox \"79f3631ed214d97e21f4351fe1cfa793b0ee5aff2b0578f3551b560a66887279\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 19:34:08.472376 env[1141]: time="2023-10-02T19:34:08.472328666Z" level=info msg="CreateContainer within sandbox \"79f3631ed214d97e21f4351fe1cfa793b0ee5aff2b0578f3551b560a66887279\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"26a52674d7ccfc4103c357082a2e2152cfd704982452b4af936b21c6fb337335\"" Oct 2 19:34:08.473031 env[1141]: time="2023-10-02T19:34:08.472993647Z" level=info msg="StartContainer for \"26a52674d7ccfc4103c357082a2e2152cfd704982452b4af936b21c6fb337335\"" Oct 2 19:34:08.487841 
systemd[1]: Started cri-containerd-26a52674d7ccfc4103c357082a2e2152cfd704982452b4af936b21c6fb337335.scope. Oct 2 19:34:08.517000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.517000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.517000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.517000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.517000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.517000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.517000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.517000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.517000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.517000 
audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.517000 audit: BPF prog-id=79 op=LOAD Oct 2 19:34:08.519298 kubelet[1440]: E1002 19:34:08.519276 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:08.518000 audit[2075]: AVC avc: denied { bpf } for pid=2075 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.518000 audit[2075]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000145b38 a2=10 a3=0 items=0 ppid=2031 pid=2075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:08.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236613532363734643763636663343130336333353730383261326532 Oct 2 19:34:08.518000 audit[2075]: AVC avc: denied { perfmon } for pid=2075 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.518000 audit[2075]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001455a0 a2=3c a3=0 items=0 ppid=2031 pid=2075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:08.518000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236613532363734643763636663343130336333353730383261326532 Oct 2 19:34:08.518000 audit[2075]: AVC avc: denied { bpf } for pid=2075 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.518000 audit[2075]: AVC avc: denied { bpf } for pid=2075 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.518000 audit[2075]: AVC avc: denied { bpf } for pid=2075 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.518000 audit[2075]: AVC avc: denied { perfmon } for pid=2075 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.518000 audit[2075]: AVC avc: denied { perfmon } for pid=2075 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.518000 audit[2075]: AVC avc: denied { perfmon } for pid=2075 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.518000 audit[2075]: AVC avc: denied { perfmon } for pid=2075 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.518000 audit[2075]: AVC avc: denied { perfmon } for pid=2075 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.518000 audit[2075]: AVC avc: 
denied { bpf } for pid=2075 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.518000 audit[2075]: AVC avc: denied { bpf } for pid=2075 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.518000 audit: BPF prog-id=80 op=LOAD Oct 2 19:34:08.518000 audit[2075]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001458e0 a2=78 a3=0 items=0 ppid=2031 pid=2075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:08.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236613532363734643763636663343130336333353730383261326532 Oct 2 19:34:08.518000 audit[2075]: AVC avc: denied { bpf } for pid=2075 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.518000 audit[2075]: AVC avc: denied { bpf } for pid=2075 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.518000 audit[2075]: AVC avc: denied { perfmon } for pid=2075 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.518000 audit[2075]: AVC avc: denied { perfmon } for pid=2075 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.518000 audit[2075]: AVC avc: denied { perfmon } for pid=2075 comm="runc" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.518000 audit[2075]: AVC avc: denied { perfmon } for pid=2075 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.518000 audit[2075]: AVC avc: denied { perfmon } for pid=2075 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.518000 audit[2075]: AVC avc: denied { bpf } for pid=2075 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.518000 audit[2075]: AVC avc: denied { bpf } for pid=2075 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.518000 audit: BPF prog-id=81 op=LOAD Oct 2 19:34:08.518000 audit[2075]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000145670 a2=78 a3=0 items=0 ppid=2031 pid=2075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:08.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236613532363734643763636663343130336333353730383261326532 Oct 2 19:34:08.518000 audit: BPF prog-id=81 op=UNLOAD Oct 2 19:34:08.518000 audit: BPF prog-id=80 op=UNLOAD Oct 2 19:34:08.518000 audit[2075]: AVC avc: denied { bpf } for pid=2075 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:34:08.518000 audit[2075]: AVC avc: denied { bpf } for pid=2075 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.518000 audit[2075]: AVC avc: denied { bpf } for pid=2075 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.518000 audit[2075]: AVC avc: denied { perfmon } for pid=2075 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.518000 audit[2075]: AVC avc: denied { perfmon } for pid=2075 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.518000 audit[2075]: AVC avc: denied { perfmon } for pid=2075 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.518000 audit[2075]: AVC avc: denied { perfmon } for pid=2075 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.518000 audit[2075]: AVC avc: denied { perfmon } for pid=2075 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.518000 audit[2075]: AVC avc: denied { bpf } for pid=2075 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.518000 audit[2075]: AVC avc: denied { bpf } for pid=2075 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.518000 audit: BPF prog-id=82 op=LOAD Oct 2 19:34:08.518000 audit[2075]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=16 a0=5 a1=4000145b40 a2=78 a3=0 items=0 ppid=2031 pid=2075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:08.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236613532363734643763636663343130336333353730383261326532 Oct 2 19:34:08.520808 env[1141]: time="2023-10-02T19:34:08.519755922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vrqdw,Uid:9ec7cd66-c9a5-49cf-8739-3f0fd159173b,Namespace:kube-system,Attempt:0,}" Oct 2 19:34:08.542309 env[1141]: time="2023-10-02T19:34:08.541385546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:34:08.542309 env[1141]: time="2023-10-02T19:34:08.541424427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:34:08.542309 env[1141]: time="2023-10-02T19:34:08.541435108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:34:08.542309 env[1141]: time="2023-10-02T19:34:08.541583232Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ff53010587ecaae9a76cb999e9f25179f6e6c1731516fc34b224448e8bd0ec9 pid=2104 runtime=io.containerd.runc.v2 Oct 2 19:34:08.546901 env[1141]: time="2023-10-02T19:34:08.546844634Z" level=info msg="StartContainer for \"26a52674d7ccfc4103c357082a2e2152cfd704982452b4af936b21c6fb337335\" returns successfully" Oct 2 19:34:08.557833 systemd[1]: Started cri-containerd-7ff53010587ecaae9a76cb999e9f25179f6e6c1731516fc34b224448e8bd0ec9.scope. Oct 2 19:34:08.577000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.577000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.577000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.577000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.577000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.577000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.577000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.577000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.577000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.577000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.577000 audit: BPF prog-id=83 op=LOAD Oct 2 19:34:08.578000 audit[2120]: AVC avc: denied { bpf } for pid=2120 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.578000 audit[2120]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000195b38 a2=10 a3=0 items=0 ppid=2104 pid=2120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:08.578000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766663533303130353837656361616539613736636239393965396632 Oct 2 19:34:08.578000 audit[2120]: AVC avc: denied { perfmon } for pid=2120 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.578000 audit[2120]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=2104 
pid=2120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:08.578000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766663533303130353837656361616539613736636239393965396632 Oct 2 19:34:08.578000 audit[2120]: AVC avc: denied { bpf } for pid=2120 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.578000 audit[2120]: AVC avc: denied { bpf } for pid=2120 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.578000 audit[2120]: AVC avc: denied { bpf } for pid=2120 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.578000 audit[2120]: AVC avc: denied { perfmon } for pid=2120 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.578000 audit[2120]: AVC avc: denied { perfmon } for pid=2120 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.578000 audit[2120]: AVC avc: denied { perfmon } for pid=2120 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.578000 audit[2120]: AVC avc: denied { perfmon } for pid=2120 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:34:08.578000 audit[2120]: AVC avc: denied { perfmon } for pid=2120 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.578000 audit[2120]: AVC avc: denied { bpf } for pid=2120 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.578000 audit[2120]: AVC avc: denied { bpf } for pid=2120 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.578000 audit: BPF prog-id=84 op=LOAD Oct 2 19:34:08.578000 audit[2120]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=2104 pid=2120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:08.578000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766663533303130353837656361616539613736636239393965396632 Oct 2 19:34:08.578000 audit[2120]: AVC avc: denied { bpf } for pid=2120 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.578000 audit[2120]: AVC avc: denied { bpf } for pid=2120 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.578000 audit[2120]: AVC avc: denied { perfmon } for pid=2120 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.578000 audit[2120]: AVC avc: 
denied { perfmon } for pid=2120 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.578000 audit[2120]: AVC avc: denied { perfmon } for pid=2120 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.578000 audit[2120]: AVC avc: denied { perfmon } for pid=2120 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.578000 audit[2120]: AVC avc: denied { perfmon } for pid=2120 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.578000 audit[2120]: AVC avc: denied { bpf } for pid=2120 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.578000 audit[2120]: AVC avc: denied { bpf } for pid=2120 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.578000 audit: BPF prog-id=85 op=LOAD Oct 2 19:34:08.578000 audit[2120]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=2104 pid=2120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:08.578000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766663533303130353837656361616539613736636239393965396632 Oct 2 19:34:08.578000 audit: BPF prog-id=85 op=UNLOAD Oct 2 19:34:08.578000 
audit: BPF prog-id=84 op=UNLOAD Oct 2 19:34:08.578000 audit[2120]: AVC avc: denied { bpf } for pid=2120 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.578000 audit[2120]: AVC avc: denied { bpf } for pid=2120 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.578000 audit[2120]: AVC avc: denied { bpf } for pid=2120 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.578000 audit[2120]: AVC avc: denied { perfmon } for pid=2120 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.578000 audit[2120]: AVC avc: denied { perfmon } for pid=2120 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.578000 audit[2120]: AVC avc: denied { perfmon } for pid=2120 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.578000 audit[2120]: AVC avc: denied { perfmon } for pid=2120 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.578000 audit[2120]: AVC avc: denied { perfmon } for pid=2120 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.578000 audit[2120]: AVC avc: denied { bpf } for pid=2120 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.578000 audit[2120]: AVC avc: denied { bpf } for pid=2120 
comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:08.578000 audit: BPF prog-id=86 op=LOAD Oct 2 19:34:08.578000 audit[2120]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=2104 pid=2120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:08.578000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766663533303130353837656361616539613736636239393965396632 Oct 2 19:34:08.592894 env[1141]: time="2023-10-02T19:34:08.592852086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vrqdw,Uid:9ec7cd66-c9a5-49cf-8739-3f0fd159173b,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ff53010587ecaae9a76cb999e9f25179f6e6c1731516fc34b224448e8bd0ec9\"" Oct 2 19:34:08.593654 kubelet[1440]: E1002 19:34:08.593632 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:08.595703 env[1141]: time="2023-10-02T19:34:08.595667852Z" level=info msg="CreateContainer within sandbox \"7ff53010587ecaae9a76cb999e9f25179f6e6c1731516fc34b224448e8bd0ec9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:34:08.603000 audit[2086]: AVC avc: denied { map_create } for pid=2086 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c42,c793 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c42,c793 tclass=bpf permissive=0 Oct 2 19:34:08.603000 audit[2086]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-13 a0=0 a1=400029b768 a2=48 a3=0 items=0 
ppid=2031 pid=2086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c42,c793 key=(null) Oct 2 19:34:08.603000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 19:34:08.612092 env[1141]: time="2023-10-02T19:34:08.612045755Z" level=info msg="CreateContainer within sandbox \"7ff53010587ecaae9a76cb999e9f25179f6e6c1731516fc34b224448e8bd0ec9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"685af5a1c569ad507cabb8a3ab93171a77b345aad710070b459a92b4884ae86e\"" Oct 2 19:34:08.613088 env[1141]: time="2023-10-02T19:34:08.613053506Z" level=info msg="StartContainer for \"685af5a1c569ad507cabb8a3ab93171a77b345aad710070b459a92b4884ae86e\"" Oct 2 19:34:08.631457 systemd[1]: Started cri-containerd-685af5a1c569ad507cabb8a3ab93171a77b345aad710070b459a92b4884ae86e.scope. Oct 2 19:34:08.652591 systemd[1]: cri-containerd-685af5a1c569ad507cabb8a3ab93171a77b345aad710070b459a92b4884ae86e.scope: Deactivated successfully. 
Oct 2 19:34:08.734814 env[1141]: time="2023-10-02T19:34:08.734673479Z" level=info msg="shim disconnected" id=685af5a1c569ad507cabb8a3ab93171a77b345aad710070b459a92b4884ae86e Oct 2 19:34:08.734814 env[1141]: time="2023-10-02T19:34:08.734735001Z" level=warning msg="cleaning up after shim disconnected" id=685af5a1c569ad507cabb8a3ab93171a77b345aad710070b459a92b4884ae86e namespace=k8s.io Oct 2 19:34:08.734814 env[1141]: time="2023-10-02T19:34:08.734754642Z" level=info msg="cleaning up dead shim" Oct 2 19:34:08.744306 env[1141]: time="2023-10-02T19:34:08.744250413Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:34:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2172 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:34:08Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/685af5a1c569ad507cabb8a3ab93171a77b345aad710070b459a92b4884ae86e/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:34:08.744576 env[1141]: time="2023-10-02T19:34:08.744502061Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 19:34:08.744749 env[1141]: time="2023-10-02T19:34:08.744700067Z" level=error msg="Failed to pipe stdout of container \"685af5a1c569ad507cabb8a3ab93171a77b345aad710070b459a92b4884ae86e\"" error="reading from a closed fifo" Oct 2 19:34:08.744879 env[1141]: time="2023-10-02T19:34:08.744834511Z" level=error msg="Failed to pipe stderr of container \"685af5a1c569ad507cabb8a3ab93171a77b345aad710070b459a92b4884ae86e\"" error="reading from a closed fifo" Oct 2 19:34:08.746862 env[1141]: time="2023-10-02T19:34:08.746798812Z" level=error msg="StartContainer for \"685af5a1c569ad507cabb8a3ab93171a77b345aad710070b459a92b4884ae86e\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:34:08.747440 kubelet[1440]: E1002 19:34:08.747085 1440 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="685af5a1c569ad507cabb8a3ab93171a77b345aad710070b459a92b4884ae86e" Oct 2 19:34:08.747440 kubelet[1440]: E1002 19:34:08.747196 1440 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:34:08.747440 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:34:08.747440 kubelet[1440]: rm /hostbin/cilium-mount Oct 2 19:34:08.747440 kubelet[1440]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-wlpwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-vrqdw_kube-system(9ec7cd66-c9a5-49cf-8739-3f0fd159173b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:34:08.747440 kubelet[1440]: E1002 19:34:08.747233 1440 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-vrqdw" podUID=9ec7cd66-c9a5-49cf-8739-3f0fd159173b Oct 2 19:34:08.872978 kubelet[1440]: E1002 19:34:08.872936 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:09.141378 kubelet[1440]: E1002 19:34:09.141085 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:09.142715 kubelet[1440]: E1002 19:34:09.142437 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:09.144466 env[1141]: 
time="2023-10-02T19:34:09.144428189Z" level=info msg="CreateContainer within sandbox \"7ff53010587ecaae9a76cb999e9f25179f6e6c1731516fc34b224448e8bd0ec9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:34:09.154322 env[1141]: time="2023-10-02T19:34:09.154264452Z" level=info msg="CreateContainer within sandbox \"7ff53010587ecaae9a76cb999e9f25179f6e6c1731516fc34b224448e8bd0ec9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"3cdb640f350e813b160416c9c0848e186487c5fc66b8599abbd0e4fdb70d3678\"" Oct 2 19:34:09.158269 env[1141]: time="2023-10-02T19:34:09.158006807Z" level=info msg="StartContainer for \"3cdb640f350e813b160416c9c0848e186487c5fc66b8599abbd0e4fdb70d3678\"" Oct 2 19:34:09.158483 kubelet[1440]: I1002 19:34:09.158452 1440 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-lk7j8" podStartSLOduration=1.8292904540000001 podCreationTimestamp="2023-10-02 19:34:06 +0000 UTC" firstStartedPulling="2023-10-02 19:34:07.131429459 +0000 UTC m=+192.434883790" lastFinishedPulling="2023-10-02 19:34:08.460550425 +0000 UTC m=+193.764004716" observedRunningTime="2023-10-02 19:34:09.155024076 +0000 UTC m=+194.458478407" watchObservedRunningTime="2023-10-02 19:34:09.15841138 +0000 UTC m=+194.461865711" Oct 2 19:34:09.179640 systemd[1]: Started cri-containerd-3cdb640f350e813b160416c9c0848e186487c5fc66b8599abbd0e4fdb70d3678.scope. Oct 2 19:34:09.199493 systemd[1]: cri-containerd-3cdb640f350e813b160416c9c0848e186487c5fc66b8599abbd0e4fdb70d3678.scope: Deactivated successfully. 
Oct 2 19:34:09.219236 env[1141]: time="2023-10-02T19:34:09.219184370Z" level=info msg="shim disconnected" id=3cdb640f350e813b160416c9c0848e186487c5fc66b8599abbd0e4fdb70d3678 Oct 2 19:34:09.219493 env[1141]: time="2023-10-02T19:34:09.219474219Z" level=warning msg="cleaning up after shim disconnected" id=3cdb640f350e813b160416c9c0848e186487c5fc66b8599abbd0e4fdb70d3678 namespace=k8s.io Oct 2 19:34:09.219584 env[1141]: time="2023-10-02T19:34:09.219569742Z" level=info msg="cleaning up dead shim" Oct 2 19:34:09.228292 env[1141]: time="2023-10-02T19:34:09.228237169Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:34:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2209 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:34:09Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/3cdb640f350e813b160416c9c0848e186487c5fc66b8599abbd0e4fdb70d3678/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:34:09.228729 env[1141]: time="2023-10-02T19:34:09.228662382Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 19:34:09.229423 env[1141]: time="2023-10-02T19:34:09.228920750Z" level=error msg="Failed to pipe stderr of container \"3cdb640f350e813b160416c9c0848e186487c5fc66b8599abbd0e4fdb70d3678\"" error="reading from a closed fifo" Oct 2 19:34:09.229423 env[1141]: time="2023-10-02T19:34:09.229164278Z" level=error msg="Failed to pipe stdout of container \"3cdb640f350e813b160416c9c0848e186487c5fc66b8599abbd0e4fdb70d3678\"" error="reading from a closed fifo" Oct 2 19:34:09.231873 env[1141]: time="2023-10-02T19:34:09.231822159Z" level=error msg="StartContainer for \"3cdb640f350e813b160416c9c0848e186487c5fc66b8599abbd0e4fdb70d3678\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:34:09.232308 kubelet[1440]: E1002 19:34:09.232092 1440 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="3cdb640f350e813b160416c9c0848e186487c5fc66b8599abbd0e4fdb70d3678" Oct 2 19:34:09.232308 kubelet[1440]: E1002 19:34:09.232228 1440 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:34:09.232308 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:34:09.232308 kubelet[1440]: rm /hostbin/cilium-mount Oct 2 19:34:09.232308 kubelet[1440]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-wlpwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-vrqdw_kube-system(9ec7cd66-c9a5-49cf-8739-3f0fd159173b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:34:09.232308 kubelet[1440]: E1002 19:34:09.232279 1440 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-vrqdw" podUID=9ec7cd66-c9a5-49cf-8739-3f0fd159173b Oct 2 19:34:09.873578 kubelet[1440]: E1002 19:34:09.873530 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:09.927530 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cdb640f350e813b160416c9c0848e186487c5fc66b8599abbd0e4fdb70d3678-rootfs.mount: Deactivated successfully. 
Oct 2 19:34:10.145311 kubelet[1440]: I1002 19:34:10.145144 1440 scope.go:115] "RemoveContainer" containerID="685af5a1c569ad507cabb8a3ab93171a77b345aad710070b459a92b4884ae86e" Oct 2 19:34:10.146040 kubelet[1440]: E1002 19:34:10.145820 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:10.146040 kubelet[1440]: I1002 19:34:10.146015 1440 scope.go:115] "RemoveContainer" containerID="685af5a1c569ad507cabb8a3ab93171a77b345aad710070b459a92b4884ae86e" Oct 2 19:34:10.147132 env[1141]: time="2023-10-02T19:34:10.147078502Z" level=info msg="RemoveContainer for \"685af5a1c569ad507cabb8a3ab93171a77b345aad710070b459a92b4884ae86e\"" Oct 2 19:34:10.148013 env[1141]: time="2023-10-02T19:34:10.147978689Z" level=info msg="RemoveContainer for \"685af5a1c569ad507cabb8a3ab93171a77b345aad710070b459a92b4884ae86e\"" Oct 2 19:34:10.148106 env[1141]: time="2023-10-02T19:34:10.148067772Z" level=error msg="RemoveContainer for \"685af5a1c569ad507cabb8a3ab93171a77b345aad710070b459a92b4884ae86e\" failed" error="failed to set removing state for container \"685af5a1c569ad507cabb8a3ab93171a77b345aad710070b459a92b4884ae86e\": container is already in removing state" Oct 2 19:34:10.148266 kubelet[1440]: E1002 19:34:10.148244 1440 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"685af5a1c569ad507cabb8a3ab93171a77b345aad710070b459a92b4884ae86e\": container is already in removing state" containerID="685af5a1c569ad507cabb8a3ab93171a77b345aad710070b459a92b4884ae86e" Oct 2 19:34:10.148362 kubelet[1440]: E1002 19:34:10.148352 1440 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "685af5a1c569ad507cabb8a3ab93171a77b345aad710070b459a92b4884ae86e": container is already 
in removing state; Skipping pod "cilium-vrqdw_kube-system(9ec7cd66-c9a5-49cf-8739-3f0fd159173b)" Oct 2 19:34:10.148477 kubelet[1440]: E1002 19:34:10.148466 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:10.148763 kubelet[1440]: E1002 19:34:10.148749 1440 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-vrqdw_kube-system(9ec7cd66-c9a5-49cf-8739-3f0fd159173b)\"" pod="kube-system/cilium-vrqdw" podUID=9ec7cd66-c9a5-49cf-8739-3f0fd159173b Oct 2 19:34:10.149971 env[1141]: time="2023-10-02T19:34:10.149930870Z" level=info msg="RemoveContainer for \"685af5a1c569ad507cabb8a3ab93171a77b345aad710070b459a92b4884ae86e\" returns successfully" Oct 2 19:34:10.796590 kubelet[1440]: E1002 19:34:10.796558 1440 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:10.874201 kubelet[1440]: E1002 19:34:10.874164 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:11.840505 kubelet[1440]: W1002 19:34:11.840448 1440 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ec7cd66_c9a5_49cf_8739_3f0fd159173b.slice/cri-containerd-685af5a1c569ad507cabb8a3ab93171a77b345aad710070b459a92b4884ae86e.scope WatchSource:0}: container "685af5a1c569ad507cabb8a3ab93171a77b345aad710070b459a92b4884ae86e" in namespace "k8s.io": not found Oct 2 19:34:11.874875 kubelet[1440]: E1002 19:34:11.874820 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:34:12.875731 kubelet[1440]: E1002 19:34:12.875666 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:13.876213 kubelet[1440]: E1002 19:34:13.876176 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:14.877501 kubelet[1440]: E1002 19:34:14.877461 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:14.948588 kubelet[1440]: W1002 19:34:14.948553 1440 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ec7cd66_c9a5_49cf_8739_3f0fd159173b.slice/cri-containerd-3cdb640f350e813b160416c9c0848e186487c5fc66b8599abbd0e4fdb70d3678.scope WatchSource:0}: task 3cdb640f350e813b160416c9c0848e186487c5fc66b8599abbd0e4fdb70d3678 not found: not found Oct 2 19:34:15.709053 kubelet[1440]: E1002 19:34:15.709011 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:15.798037 kubelet[1440]: E1002 19:34:15.797997 1440 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:15.878003 kubelet[1440]: E1002 19:34:15.877967 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:16.878657 kubelet[1440]: E1002 19:34:16.878613 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:17.879327 kubelet[1440]: E1002 19:34:17.879292 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:18.880465 kubelet[1440]: E1002 19:34:18.880423 1440 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:19.881199 kubelet[1440]: E1002 19:34:19.881147 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:20.801706 kubelet[1440]: E1002 19:34:20.799576 1440 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:20.882081 kubelet[1440]: E1002 19:34:20.882029 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:21.801043 kubelet[1440]: E1002 19:34:21.800937 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:21.803715 env[1141]: time="2023-10-02T19:34:21.803085062Z" level=info msg="CreateContainer within sandbox \"7ff53010587ecaae9a76cb999e9f25179f6e6c1731516fc34b224448e8bd0ec9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:34:21.812668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1713829423.mount: Deactivated successfully. Oct 2 19:34:21.819967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3817251217.mount: Deactivated successfully. 
Oct 2 19:34:21.823920 env[1141]: time="2023-10-02T19:34:21.823870119Z" level=info msg="CreateContainer within sandbox \"7ff53010587ecaae9a76cb999e9f25179f6e6c1731516fc34b224448e8bd0ec9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"894a82b6d8292ca34f3aad61130c9c315b498b87dd70d26702ac057ea9b71f59\"" Oct 2 19:34:21.824420 env[1141]: time="2023-10-02T19:34:21.824392216Z" level=info msg="StartContainer for \"894a82b6d8292ca34f3aad61130c9c315b498b87dd70d26702ac057ea9b71f59\"" Oct 2 19:34:21.847731 systemd[1]: Started cri-containerd-894a82b6d8292ca34f3aad61130c9c315b498b87dd70d26702ac057ea9b71f59.scope. Oct 2 19:34:21.865079 systemd[1]: cri-containerd-894a82b6d8292ca34f3aad61130c9c315b498b87dd70d26702ac057ea9b71f59.scope: Deactivated successfully. Oct 2 19:34:21.874325 env[1141]: time="2023-10-02T19:34:21.874269511Z" level=info msg="shim disconnected" id=894a82b6d8292ca34f3aad61130c9c315b498b87dd70d26702ac057ea9b71f59 Oct 2 19:34:21.874325 env[1141]: time="2023-10-02T19:34:21.874327473Z" level=warning msg="cleaning up after shim disconnected" id=894a82b6d8292ca34f3aad61130c9c315b498b87dd70d26702ac057ea9b71f59 namespace=k8s.io Oct 2 19:34:21.874531 env[1141]: time="2023-10-02T19:34:21.874336953Z" level=info msg="cleaning up dead shim" Oct 2 19:34:21.884261 kubelet[1440]: E1002 19:34:21.882339 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:21.885120 env[1141]: time="2023-10-02T19:34:21.885059092Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:34:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2245 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:34:21Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/894a82b6d8292ca34f3aad61130c9c315b498b87dd70d26702ac057ea9b71f59/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:34:21.885397 
env[1141]: time="2023-10-02T19:34:21.885338981Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 19:34:21.885548 env[1141]: time="2023-10-02T19:34:21.885509066Z" level=error msg="Failed to pipe stdout of container \"894a82b6d8292ca34f3aad61130c9c315b498b87dd70d26702ac057ea9b71f59\"" error="reading from a closed fifo" Oct 2 19:34:21.888146 env[1141]: time="2023-10-02T19:34:21.886581900Z" level=error msg="Failed to pipe stderr of container \"894a82b6d8292ca34f3aad61130c9c315b498b87dd70d26702ac057ea9b71f59\"" error="reading from a closed fifo" Oct 2 19:34:21.890820 env[1141]: time="2023-10-02T19:34:21.890761792Z" level=error msg="StartContainer for \"894a82b6d8292ca34f3aad61130c9c315b498b87dd70d26702ac057ea9b71f59\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:34:21.891332 kubelet[1440]: E1002 19:34:21.891140 1440 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="894a82b6d8292ca34f3aad61130c9c315b498b87dd70d26702ac057ea9b71f59" Oct 2 19:34:21.891332 kubelet[1440]: E1002 19:34:21.891266 1440 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:34:21.891332 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:34:21.891332 kubelet[1440]: rm 
/hostbin/cilium-mount Oct 2 19:34:21.891332 kubelet[1440]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-wlpwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-vrqdw_kube-system(9ec7cd66-c9a5-49cf-8739-3f0fd159173b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:34:21.891332 kubelet[1440]: E1002 19:34:21.891312 1440 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim 
task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-vrqdw" podUID=9ec7cd66-c9a5-49cf-8739-3f0fd159173b Oct 2 19:34:22.168138 kubelet[1440]: I1002 19:34:22.167611 1440 scope.go:115] "RemoveContainer" containerID="3cdb640f350e813b160416c9c0848e186487c5fc66b8599abbd0e4fdb70d3678" Oct 2 19:34:22.168138 kubelet[1440]: I1002 19:34:22.168027 1440 scope.go:115] "RemoveContainer" containerID="3cdb640f350e813b160416c9c0848e186487c5fc66b8599abbd0e4fdb70d3678" Oct 2 19:34:22.169632 env[1141]: time="2023-10-02T19:34:22.169582689Z" level=info msg="RemoveContainer for \"3cdb640f350e813b160416c9c0848e186487c5fc66b8599abbd0e4fdb70d3678\"" Oct 2 19:34:22.170301 env[1141]: time="2023-10-02T19:34:22.170041183Z" level=info msg="RemoveContainer for \"3cdb640f350e813b160416c9c0848e186487c5fc66b8599abbd0e4fdb70d3678\"" Oct 2 19:34:22.170301 env[1141]: time="2023-10-02T19:34:22.170186628Z" level=error msg="RemoveContainer for \"3cdb640f350e813b160416c9c0848e186487c5fc66b8599abbd0e4fdb70d3678\" failed" error="failed to set removing state for container \"3cdb640f350e813b160416c9c0848e186487c5fc66b8599abbd0e4fdb70d3678\": container is already in removing state" Oct 2 19:34:22.170578 kubelet[1440]: E1002 19:34:22.170558 1440 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"3cdb640f350e813b160416c9c0848e186487c5fc66b8599abbd0e4fdb70d3678\": container is already in removing state" containerID="3cdb640f350e813b160416c9c0848e186487c5fc66b8599abbd0e4fdb70d3678" Oct 2 19:34:22.170690 kubelet[1440]: E1002 19:34:22.170679 1440 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "3cdb640f350e813b160416c9c0848e186487c5fc66b8599abbd0e4fdb70d3678": 
container is already in removing state; Skipping pod "cilium-vrqdw_kube-system(9ec7cd66-c9a5-49cf-8739-3f0fd159173b)" Oct 2 19:34:22.170829 kubelet[1440]: E1002 19:34:22.170809 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:22.171179 kubelet[1440]: E1002 19:34:22.171163 1440 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-vrqdw_kube-system(9ec7cd66-c9a5-49cf-8739-3f0fd159173b)\"" pod="kube-system/cilium-vrqdw" podUID=9ec7cd66-c9a5-49cf-8739-3f0fd159173b Oct 2 19:34:22.172881 env[1141]: time="2023-10-02T19:34:22.172848792Z" level=info msg="RemoveContainer for \"3cdb640f350e813b160416c9c0848e186487c5fc66b8599abbd0e4fdb70d3678\" returns successfully" Oct 2 19:34:22.809378 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-894a82b6d8292ca34f3aad61130c9c315b498b87dd70d26702ac057ea9b71f59-rootfs.mount: Deactivated successfully. 
Oct 2 19:34:22.882591 kubelet[1440]: E1002 19:34:22.882535 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:23.883489 kubelet[1440]: E1002 19:34:23.883459 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:24.884018 kubelet[1440]: E1002 19:34:24.883980 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:24.979796 kubelet[1440]: W1002 19:34:24.979749 1440 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ec7cd66_c9a5_49cf_8739_3f0fd159173b.slice/cri-containerd-894a82b6d8292ca34f3aad61130c9c315b498b87dd70d26702ac057ea9b71f59.scope WatchSource:0}: task 894a82b6d8292ca34f3aad61130c9c315b498b87dd70d26702ac057ea9b71f59 not found: not found Oct 2 19:34:25.800514 kubelet[1440]: E1002 19:34:25.800484 1440 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:25.885337 kubelet[1440]: E1002 19:34:25.885291 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:26.886334 kubelet[1440]: E1002 19:34:26.886271 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:27.887345 kubelet[1440]: E1002 19:34:27.887314 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:28.888673 kubelet[1440]: E1002 19:34:28.888637 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:29.890109 kubelet[1440]: E1002 19:34:29.890056 1440 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:30.801481 kubelet[1440]: E1002 19:34:30.801454 1440 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:30.891016 kubelet[1440]: E1002 19:34:30.890976 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:31.891391 kubelet[1440]: E1002 19:34:31.891344 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:32.800285 kubelet[1440]: E1002 19:34:32.800248 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:32.800514 kubelet[1440]: E1002 19:34:32.800481 1440 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-vrqdw_kube-system(9ec7cd66-c9a5-49cf-8739-3f0fd159173b)\"" pod="kube-system/cilium-vrqdw" podUID=9ec7cd66-c9a5-49cf-8739-3f0fd159173b Oct 2 19:34:32.891959 kubelet[1440]: E1002 19:34:32.891927 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:33.892667 kubelet[1440]: E1002 19:34:33.892631 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:34.893331 kubelet[1440]: E1002 19:34:34.893295 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:35.709313 kubelet[1440]: E1002 19:34:35.709275 1440 file.go:104] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:35.802120 kubelet[1440]: E1002 19:34:35.802075 1440 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:35.893984 kubelet[1440]: E1002 19:34:35.893934 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:36.894980 kubelet[1440]: E1002 19:34:36.894930 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:37.895647 kubelet[1440]: E1002 19:34:37.895609 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:38.896016 kubelet[1440]: E1002 19:34:38.895973 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:39.896776 kubelet[1440]: E1002 19:34:39.896733 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:40.803837 kubelet[1440]: E1002 19:34:40.803802 1440 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:40.897138 kubelet[1440]: E1002 19:34:40.897089 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:41.897561 kubelet[1440]: E1002 19:34:41.897515 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:42.897966 kubelet[1440]: E1002 19:34:42.897922 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:34:43.800931 kubelet[1440]: E1002 19:34:43.800895 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:43.802887 env[1141]: time="2023-10-02T19:34:43.802847980Z" level=info msg="CreateContainer within sandbox \"7ff53010587ecaae9a76cb999e9f25179f6e6c1731516fc34b224448e8bd0ec9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:34:43.814542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2068501477.mount: Deactivated successfully. Oct 2 19:34:43.817359 env[1141]: time="2023-10-02T19:34:43.817239394Z" level=info msg="CreateContainer within sandbox \"7ff53010587ecaae9a76cb999e9f25179f6e6c1731516fc34b224448e8bd0ec9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"25008ecad6f0866ed885d24bb25b7998bd45652c79a805d7d1ddac1b4ed4e882\"" Oct 2 19:34:43.818123 env[1141]: time="2023-10-02T19:34:43.817782619Z" level=info msg="StartContainer for \"25008ecad6f0866ed885d24bb25b7998bd45652c79a805d7d1ddac1b4ed4e882\"" Oct 2 19:34:43.837933 systemd[1]: Started cri-containerd-25008ecad6f0866ed885d24bb25b7998bd45652c79a805d7d1ddac1b4ed4e882.scope. Oct 2 19:34:43.860075 systemd[1]: cri-containerd-25008ecad6f0866ed885d24bb25b7998bd45652c79a805d7d1ddac1b4ed4e882.scope: Deactivated successfully. 
Oct 2 19:34:43.866350 env[1141]: time="2023-10-02T19:34:43.866289237Z" level=info msg="shim disconnected" id=25008ecad6f0866ed885d24bb25b7998bd45652c79a805d7d1ddac1b4ed4e882 Oct 2 19:34:43.866350 env[1141]: time="2023-10-02T19:34:43.866346515Z" level=warning msg="cleaning up after shim disconnected" id=25008ecad6f0866ed885d24bb25b7998bd45652c79a805d7d1ddac1b4ed4e882 namespace=k8s.io Oct 2 19:34:43.866350 env[1141]: time="2023-10-02T19:34:43.866355595Z" level=info msg="cleaning up dead shim" Oct 2 19:34:43.875364 env[1141]: time="2023-10-02T19:34:43.874881966Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:34:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2286 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:34:43Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/25008ecad6f0866ed885d24bb25b7998bd45652c79a805d7d1ddac1b4ed4e882/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:34:43.875364 env[1141]: time="2023-10-02T19:34:43.875144199Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 19:34:43.875518 env[1141]: time="2023-10-02T19:34:43.875383633Z" level=error msg="Failed to pipe stderr of container \"25008ecad6f0866ed885d24bb25b7998bd45652c79a805d7d1ddac1b4ed4e882\"" error="reading from a closed fifo" Oct 2 19:34:43.875617 env[1141]: time="2023-10-02T19:34:43.875587227Z" level=error msg="Failed to pipe stdout of container \"25008ecad6f0866ed885d24bb25b7998bd45652c79a805d7d1ddac1b4ed4e882\"" error="reading from a closed fifo" Oct 2 19:34:43.878620 env[1141]: time="2023-10-02T19:34:43.878562307Z" level=error msg="StartContainer for \"25008ecad6f0866ed885d24bb25b7998bd45652c79a805d7d1ddac1b4ed4e882\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:34:43.878819 kubelet[1440]: E1002 19:34:43.878791 1440 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="25008ecad6f0866ed885d24bb25b7998bd45652c79a805d7d1ddac1b4ed4e882" Oct 2 19:34:43.878946 kubelet[1440]: E1002 19:34:43.878897 1440 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:34:43.878946 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:34:43.878946 kubelet[1440]: rm /hostbin/cilium-mount Oct 2 19:34:43.878946 kubelet[1440]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-wlpwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-vrqdw_kube-system(9ec7cd66-c9a5-49cf-8739-3f0fd159173b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:34:43.878946 kubelet[1440]: E1002 19:34:43.878935 1440 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-vrqdw" podUID=9ec7cd66-c9a5-49cf-8739-3f0fd159173b Oct 2 19:34:43.898140 kubelet[1440]: E1002 19:34:43.898092 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:44.204292 kubelet[1440]: I1002 19:34:44.203585 1440 scope.go:115] "RemoveContainer" containerID="894a82b6d8292ca34f3aad61130c9c315b498b87dd70d26702ac057ea9b71f59" Oct 2 19:34:44.204292 kubelet[1440]: I1002 19:34:44.203891 1440 scope.go:115] "RemoveContainer" containerID="894a82b6d8292ca34f3aad61130c9c315b498b87dd70d26702ac057ea9b71f59" Oct 2 19:34:44.206476 env[1141]: time="2023-10-02T19:34:44.206404429Z" level=info msg="RemoveContainer for 
\"894a82b6d8292ca34f3aad61130c9c315b498b87dd70d26702ac057ea9b71f59\"" Oct 2 19:34:44.207447 env[1141]: time="2023-10-02T19:34:44.207414203Z" level=info msg="RemoveContainer for \"894a82b6d8292ca34f3aad61130c9c315b498b87dd70d26702ac057ea9b71f59\"" Oct 2 19:34:44.208671 env[1141]: time="2023-10-02T19:34:44.208615211Z" level=error msg="RemoveContainer for \"894a82b6d8292ca34f3aad61130c9c315b498b87dd70d26702ac057ea9b71f59\" failed" error="failed to set removing state for container \"894a82b6d8292ca34f3aad61130c9c315b498b87dd70d26702ac057ea9b71f59\": container is already in removing state" Oct 2 19:34:44.208964 kubelet[1440]: E1002 19:34:44.208945 1440 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"894a82b6d8292ca34f3aad61130c9c315b498b87dd70d26702ac057ea9b71f59\": container is already in removing state" containerID="894a82b6d8292ca34f3aad61130c9c315b498b87dd70d26702ac057ea9b71f59" Oct 2 19:34:44.209029 kubelet[1440]: E1002 19:34:44.208983 1440 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "894a82b6d8292ca34f3aad61130c9c315b498b87dd70d26702ac057ea9b71f59": container is already in removing state; Skipping pod "cilium-vrqdw_kube-system(9ec7cd66-c9a5-49cf-8739-3f0fd159173b)" Oct 2 19:34:44.209060 kubelet[1440]: E1002 19:34:44.209047 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:44.209338 kubelet[1440]: E1002 19:34:44.209315 1440 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-vrqdw_kube-system(9ec7cd66-c9a5-49cf-8739-3f0fd159173b)\"" pod="kube-system/cilium-vrqdw" 
podUID=9ec7cd66-c9a5-49cf-8739-3f0fd159173b Oct 2 19:34:44.213546 env[1141]: time="2023-10-02T19:34:44.213510404Z" level=info msg="RemoveContainer for \"894a82b6d8292ca34f3aad61130c9c315b498b87dd70d26702ac057ea9b71f59\" returns successfully" Oct 2 19:34:44.812370 systemd[1]: run-containerd-runc-k8s.io-25008ecad6f0866ed885d24bb25b7998bd45652c79a805d7d1ddac1b4ed4e882-runc.kQhilP.mount: Deactivated successfully. Oct 2 19:34:44.812466 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25008ecad6f0866ed885d24bb25b7998bd45652c79a805d7d1ddac1b4ed4e882-rootfs.mount: Deactivated successfully. Oct 2 19:34:44.899277 kubelet[1440]: E1002 19:34:44.899224 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:45.804898 kubelet[1440]: E1002 19:34:45.804869 1440 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:45.900867 kubelet[1440]: E1002 19:34:45.900256 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:46.901557 kubelet[1440]: E1002 19:34:46.901511 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:46.971823 kubelet[1440]: W1002 19:34:46.971774 1440 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ec7cd66_c9a5_49cf_8739_3f0fd159173b.slice/cri-containerd-25008ecad6f0866ed885d24bb25b7998bd45652c79a805d7d1ddac1b4ed4e882.scope WatchSource:0}: task 25008ecad6f0866ed885d24bb25b7998bd45652c79a805d7d1ddac1b4ed4e882 not found: not found Oct 2 19:34:47.902647 kubelet[1440]: E1002 19:34:47.902612 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
Oct 2 19:34:47.957411 update_engine[1131]: I1002 19:34:47.957340 1131 prefs.cc:51] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Oct 2 19:34:47.957411 update_engine[1131]: I1002 19:34:47.957381 1131 prefs.cc:51] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Oct 2 19:34:47.957958 update_engine[1131]: I1002 19:34:47.957926 1131 prefs.cc:51] aleph-version not present in /var/lib/update_engine/prefs Oct 2 19:34:47.958382 update_engine[1131]: I1002 19:34:47.958352 1131 omaha_request_params.cc:62] Current group set to lts Oct 2 19:34:47.958524 update_engine[1131]: I1002 19:34:47.958501 1131 update_attempter.cc:495] Already updated boot flags. Skipping. Oct 2 19:34:47.958524 update_engine[1131]: I1002 19:34:47.958509 1131 update_attempter.cc:638] Scheduling an action processor start. Oct 2 19:34:47.958592 update_engine[1131]: I1002 19:34:47.958534 1131 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 2 19:34:47.958592 update_engine[1131]: I1002 19:34:47.958562 1131 prefs.cc:51] previous-version not present in /var/lib/update_engine/prefs Oct 2 19:34:47.959306 locksmithd[1173]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Oct 2 19:34:47.960962 update_engine[1131]: I1002 19:34:47.960940 1131 omaha_request_action.cc:268] Posting an Omaha request to https://public.update.flatcar-linux.net/v1/update/ Oct 2 19:34:47.960962 update_engine[1131]: I1002 19:34:47.960956 1131 omaha_request_action.cc:269] Request: Oct 2 19:34:47.960962 update_engine[1131]: Oct 2 19:34:47.960962 update_engine[1131]: Oct 2 19:34:47.960962 update_engine[1131]: Oct 2 19:34:47.960962 update_engine[1131]: Oct 2 19:34:47.960962 update_engine[1131]: Oct 2 19:34:47.960962 update_engine[1131]: Oct 2 19:34:47.960962 update_engine[1131]: Oct 2 19:34:47.960962 update_engine[1131]: Oct 2 19:34:47.960962 update_engine[1131]: I1002 19:34:47.960961 1131 
libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 2 19:34:47.962087 update_engine[1131]: I1002 19:34:47.962038 1131 libcurl_http_fetcher.cc:174] Setting up curl options for HTTPS Oct 2 19:34:47.962242 update_engine[1131]: I1002 19:34:47.962229 1131 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 2 19:34:48.903706 kubelet[1440]: E1002 19:34:48.903664 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:49.118392 update_engine[1131]: I1002 19:34:49.118341 1131 prefs.cc:51] update-server-cert-0-2 not present in /var/lib/update_engine/prefs Oct 2 19:34:49.118687 update_engine[1131]: I1002 19:34:49.118599 1131 prefs.cc:51] update-server-cert-0-1 not present in /var/lib/update_engine/prefs Oct 2 19:34:49.119119 update_engine[1131]: I1002 19:34:49.119081 1131 prefs.cc:51] update-server-cert-0-0 not present in /var/lib/update_engine/prefs Oct 2 19:34:49.410437 update_engine[1131]: I1002 19:34:49.410359 1131 libcurl_http_fetcher.cc:263] HTTP response code: 200 Oct 2 19:34:49.411968 update_engine[1131]: I1002 19:34:49.411933 1131 libcurl_http_fetcher.cc:320] Transfer completed (200), 314 bytes downloaded Oct 2 19:34:49.411968 update_engine[1131]: I1002 19:34:49.411955 1131 omaha_request_action.cc:619] Omaha request response: Oct 2 19:34:49.411968 update_engine[1131]: Oct 2 19:34:49.414186 update_engine[1131]: I1002 19:34:49.414155 1131 omaha_request_action.cc:409] No update. Oct 2 19:34:49.414186 update_engine[1131]: I1002 19:34:49.414180 1131 action_processor.cc:82] ActionProcessor::ActionComplete: finished OmahaRequestAction, starting OmahaResponseHandlerAction Oct 2 19:34:49.414186 update_engine[1131]: I1002 19:34:49.414186 1131 omaha_response_handler_action.cc:36] There are no updates. Aborting. Oct 2 19:34:49.414186 update_engine[1131]: I1002 19:34:49.414188 1131 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaResponseHandlerAction action failed. 
Aborting processing. Oct 2 19:34:49.414186 update_engine[1131]: I1002 19:34:49.414191 1131 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaResponseHandlerAction Oct 2 19:34:49.414332 update_engine[1131]: I1002 19:34:49.414194 1131 update_attempter.cc:302] Processing Done. Oct 2 19:34:49.414332 update_engine[1131]: I1002 19:34:49.414206 1131 update_attempter.cc:338] No update. Oct 2 19:34:49.414332 update_engine[1131]: I1002 19:34:49.414216 1131 update_check_scheduler.cc:74] Next update check in 48m26s Oct 2 19:34:49.414559 locksmithd[1173]: LastCheckedTime=1696275289 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Oct 2 19:34:49.904864 kubelet[1440]: E1002 19:34:49.904811 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:50.806137 kubelet[1440]: E1002 19:34:50.806088 1440 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:50.905263 kubelet[1440]: E1002 19:34:50.905225 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:51.801529 kubelet[1440]: E1002 19:34:51.801491 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:51.905792 kubelet[1440]: E1002 19:34:51.905748 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:52.906646 kubelet[1440]: E1002 19:34:52.906603 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:53.907315 kubelet[1440]: E1002 19:34:53.907279 1440 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:54.908168 kubelet[1440]: E1002 19:34:54.908117 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:55.709238 kubelet[1440]: E1002 19:34:55.709161 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:55.726563 env[1141]: time="2023-10-02T19:34:55.726514794Z" level=info msg="StopPodSandbox for \"0d087a01152f85056fd5c026bad5750a5e4677d5904379231ad8cb8efb6d43fe\"" Oct 2 19:34:55.726860 env[1141]: time="2023-10-02T19:34:55.726616632Z" level=info msg="TearDown network for sandbox \"0d087a01152f85056fd5c026bad5750a5e4677d5904379231ad8cb8efb6d43fe\" successfully" Oct 2 19:34:55.726860 env[1141]: time="2023-10-02T19:34:55.726647192Z" level=info msg="StopPodSandbox for \"0d087a01152f85056fd5c026bad5750a5e4677d5904379231ad8cb8efb6d43fe\" returns successfully" Oct 2 19:34:55.727180 env[1141]: time="2023-10-02T19:34:55.727148263Z" level=info msg="RemovePodSandbox for \"0d087a01152f85056fd5c026bad5750a5e4677d5904379231ad8cb8efb6d43fe\"" Oct 2 19:34:55.727306 env[1141]: time="2023-10-02T19:34:55.727269781Z" level=info msg="Forcibly stopping sandbox \"0d087a01152f85056fd5c026bad5750a5e4677d5904379231ad8cb8efb6d43fe\"" Oct 2 19:34:55.727432 env[1141]: time="2023-10-02T19:34:55.727405978Z" level=info msg="TearDown network for sandbox \"0d087a01152f85056fd5c026bad5750a5e4677d5904379231ad8cb8efb6d43fe\" successfully" Oct 2 19:34:55.733010 env[1141]: time="2023-10-02T19:34:55.732977639Z" level=info msg="RemovePodSandbox \"0d087a01152f85056fd5c026bad5750a5e4677d5904379231ad8cb8efb6d43fe\" returns successfully" Oct 2 19:34:55.806742 kubelet[1440]: E1002 19:34:55.806714 1440 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" Oct 2 19:34:55.908498 kubelet[1440]: E1002 19:34:55.908455 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:56.909621 kubelet[1440]: E1002 19:34:56.909532 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:57.802237 kubelet[1440]: E1002 19:34:57.802207 1440 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:57.802446 kubelet[1440]: E1002 19:34:57.802428 1440 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-vrqdw_kube-system(9ec7cd66-c9a5-49cf-8739-3f0fd159173b)\"" pod="kube-system/cilium-vrqdw" podUID=9ec7cd66-c9a5-49cf-8739-3f0fd159173b Oct 2 19:34:57.910186 kubelet[1440]: E1002 19:34:57.909880 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:58.910352 kubelet[1440]: E1002 19:34:58.910319 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:59.911721 kubelet[1440]: E1002 19:34:59.911683 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:00.809369 kubelet[1440]: E1002 19:35:00.809343 1440 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:00.912328 kubelet[1440]: E1002 19:35:00.912294 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:01.913568 
kubelet[1440]: E1002 19:35:01.913534 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:02.914314 kubelet[1440]: E1002 19:35:02.914273 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:03.915201 kubelet[1440]: E1002 19:35:03.915168 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:04.916012 kubelet[1440]: E1002 19:35:04.915982 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:05.810493 kubelet[1440]: E1002 19:35:05.810465 1440 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:05.916513 kubelet[1440]: E1002 19:35:05.916472 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:06.917196 kubelet[1440]: E1002 19:35:06.917155 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:07.917745 kubelet[1440]: E1002 19:35:07.917687 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:07.948689 env[1141]: time="2023-10-02T19:35:07.948645425Z" level=info msg="StopPodSandbox for \"7ff53010587ecaae9a76cb999e9f25179f6e6c1731516fc34b224448e8bd0ec9\"" Oct 2 19:35:07.949154 env[1141]: time="2023-10-02T19:35:07.949128740Z" level=info msg="Container to stop \"25008ecad6f0866ed885d24bb25b7998bd45652c79a805d7d1ddac1b4ed4e882\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:35:07.950434 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-7ff53010587ecaae9a76cb999e9f25179f6e6c1731516fc34b224448e8bd0ec9-shm.mount: Deactivated successfully. Oct 2 19:35:07.957362 systemd[1]: cri-containerd-7ff53010587ecaae9a76cb999e9f25179f6e6c1731516fc34b224448e8bd0ec9.scope: Deactivated successfully. Oct 2 19:35:07.956000 audit: BPF prog-id=83 op=UNLOAD Oct 2 19:35:07.958154 kernel: kauditd_printk_skb: 166 callbacks suppressed Oct 2 19:35:07.958217 kernel: audit: type=1334 audit(1696275307.956:700): prog-id=83 op=UNLOAD Oct 2 19:35:07.962000 audit: BPF prog-id=86 op=UNLOAD Oct 2 19:35:07.964122 kernel: audit: type=1334 audit(1696275307.962:701): prog-id=86 op=UNLOAD Oct 2 19:35:07.973441 env[1141]: time="2023-10-02T19:35:07.973402011Z" level=info msg="StopContainer for \"26a52674d7ccfc4103c357082a2e2152cfd704982452b4af936b21c6fb337335\" with timeout 30 (s)" Oct 2 19:35:07.973828 env[1141]: time="2023-10-02T19:35:07.973801367Z" level=info msg="Stop container \"26a52674d7ccfc4103c357082a2e2152cfd704982452b4af936b21c6fb337335\" with signal terminated" Oct 2 19:35:07.977029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ff53010587ecaae9a76cb999e9f25179f6e6c1731516fc34b224448e8bd0ec9-rootfs.mount: Deactivated successfully. Oct 2 19:35:07.982784 env[1141]: time="2023-10-02T19:35:07.982732196Z" level=info msg="shim disconnected" id=7ff53010587ecaae9a76cb999e9f25179f6e6c1731516fc34b224448e8bd0ec9 Oct 2 19:35:07.983169 env[1141]: time="2023-10-02T19:35:07.983138192Z" level=warning msg="cleaning up after shim disconnected" id=7ff53010587ecaae9a76cb999e9f25179f6e6c1731516fc34b224448e8bd0ec9 namespace=k8s.io Oct 2 19:35:07.983169 env[1141]: time="2023-10-02T19:35:07.983163672Z" level=info msg="cleaning up dead shim" Oct 2 19:35:07.986189 systemd[1]: cri-containerd-26a52674d7ccfc4103c357082a2e2152cfd704982452b4af936b21c6fb337335.scope: Deactivated successfully. 
Oct 2 19:35:07.985000 audit: BPF prog-id=79 op=UNLOAD Oct 2 19:35:07.988118 kernel: audit: type=1334 audit(1696275307.985:702): prog-id=79 op=UNLOAD Oct 2 19:35:07.990000 audit: BPF prog-id=82 op=UNLOAD Oct 2 19:35:07.992110 kernel: audit: type=1334 audit(1696275307.990:703): prog-id=82 op=UNLOAD Oct 2 19:35:07.995044 env[1141]: time="2023-10-02T19:35:07.994992751Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:35:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2327 runtime=io.containerd.runc.v2\n" Oct 2 19:35:07.995353 env[1141]: time="2023-10-02T19:35:07.995323907Z" level=info msg="TearDown network for sandbox \"7ff53010587ecaae9a76cb999e9f25179f6e6c1731516fc34b224448e8bd0ec9\" successfully" Oct 2 19:35:07.995401 env[1141]: time="2023-10-02T19:35:07.995355187Z" level=info msg="StopPodSandbox for \"7ff53010587ecaae9a76cb999e9f25179f6e6c1731516fc34b224448e8bd0ec9\" returns successfully" Oct 2 19:35:08.008607 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26a52674d7ccfc4103c357082a2e2152cfd704982452b4af936b21c6fb337335-rootfs.mount: Deactivated successfully. 
Oct 2 19:35:08.013957 env[1141]: time="2023-10-02T19:35:08.013911805Z" level=info msg="shim disconnected" id=26a52674d7ccfc4103c357082a2e2152cfd704982452b4af936b21c6fb337335 Oct 2 19:35:08.013957 env[1141]: time="2023-10-02T19:35:08.013951324Z" level=warning msg="cleaning up after shim disconnected" id=26a52674d7ccfc4103c357082a2e2152cfd704982452b4af936b21c6fb337335 namespace=k8s.io Oct 2 19:35:08.013957 env[1141]: time="2023-10-02T19:35:08.013960604Z" level=info msg="cleaning up dead shim" Oct 2 19:35:08.022270 env[1141]: time="2023-10-02T19:35:08.022225524Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:35:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2352 runtime=io.containerd.runc.v2\n" Oct 2 19:35:08.023990 env[1141]: time="2023-10-02T19:35:08.023950748Z" level=info msg="StopContainer for \"26a52674d7ccfc4103c357082a2e2152cfd704982452b4af936b21c6fb337335\" returns successfully" Oct 2 19:35:08.024524 env[1141]: time="2023-10-02T19:35:08.024492382Z" level=info msg="StopPodSandbox for \"79f3631ed214d97e21f4351fe1cfa793b0ee5aff2b0578f3551b560a66887279\"" Oct 2 19:35:08.024571 env[1141]: time="2023-10-02T19:35:08.024548262Z" level=info msg="Container to stop \"26a52674d7ccfc4103c357082a2e2152cfd704982452b4af936b21c6fb337335\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:35:08.025717 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-79f3631ed214d97e21f4351fe1cfa793b0ee5aff2b0578f3551b560a66887279-shm.mount: Deactivated successfully. Oct 2 19:35:08.032576 systemd[1]: cri-containerd-79f3631ed214d97e21f4351fe1cfa793b0ee5aff2b0578f3551b560a66887279.scope: Deactivated successfully. 
Oct 2 19:35:08.031000 audit: BPF prog-id=75 op=UNLOAD Oct 2 19:35:08.034117 kernel: audit: type=1334 audit(1696275308.031:704): prog-id=75 op=UNLOAD Oct 2 19:35:08.036000 audit: BPF prog-id=78 op=UNLOAD Oct 2 19:35:08.038106 kernel: audit: type=1334 audit(1696275308.036:705): prog-id=78 op=UNLOAD Oct 2 19:35:08.051242 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79f3631ed214d97e21f4351fe1cfa793b0ee5aff2b0578f3551b560a66887279-rootfs.mount: Deactivated successfully. Oct 2 19:35:08.056086 env[1141]: time="2023-10-02T19:35:08.056041358Z" level=info msg="shim disconnected" id=79f3631ed214d97e21f4351fe1cfa793b0ee5aff2b0578f3551b560a66887279 Oct 2 19:35:08.056230 env[1141]: time="2023-10-02T19:35:08.056086597Z" level=warning msg="cleaning up after shim disconnected" id=79f3631ed214d97e21f4351fe1cfa793b0ee5aff2b0578f3551b560a66887279 namespace=k8s.io Oct 2 19:35:08.056230 env[1141]: time="2023-10-02T19:35:08.056112277Z" level=info msg="cleaning up dead shim" Oct 2 19:35:08.064772 env[1141]: time="2023-10-02T19:35:08.064721474Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:35:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2382 runtime=io.containerd.runc.v2\n" Oct 2 19:35:08.065047 env[1141]: time="2023-10-02T19:35:08.065014271Z" level=info msg="TearDown network for sandbox \"79f3631ed214d97e21f4351fe1cfa793b0ee5aff2b0578f3551b560a66887279\" successfully" Oct 2 19:35:08.065086 env[1141]: time="2023-10-02T19:35:08.065040751Z" level=info msg="StopPodSandbox for \"79f3631ed214d97e21f4351fe1cfa793b0ee5aff2b0578f3551b560a66887279\" returns successfully" Oct 2 19:35:08.068814 kubelet[1440]: I1002 19:35:08.068787 1440 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-cilium-cgroup\") pod \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\" (UID: \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\") " Oct 2 19:35:08.068923 
kubelet[1440]: I1002 19:35:08.068826 1440 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-cilium-run\") pod \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\" (UID: \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\") " Oct 2 19:35:08.068923 kubelet[1440]: I1002 19:35:08.068847 1440 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-etc-cni-netd\") pod \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\" (UID: \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\") " Oct 2 19:35:08.068923 kubelet[1440]: I1002 19:35:08.068866 1440 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-host-proc-sys-kernel\") pod \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\" (UID: \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\") " Oct 2 19:35:08.068923 kubelet[1440]: I1002 19:35:08.068893 1440 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-host-proc-sys-net\") pod \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\" (UID: \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\") " Oct 2 19:35:08.068923 kubelet[1440]: I1002 19:35:08.068888 1440 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9ec7cd66-c9a5-49cf-8739-3f0fd159173b" (UID: "9ec7cd66-c9a5-49cf-8739-3f0fd159173b"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:08.068923 kubelet[1440]: I1002 19:35:08.068888 1440 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9ec7cd66-c9a5-49cf-8739-3f0fd159173b" (UID: "9ec7cd66-c9a5-49cf-8739-3f0fd159173b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:08.068923 kubelet[1440]: I1002 19:35:08.068918 1440 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlpwl\" (UniqueName: \"kubernetes.io/projected/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-kube-api-access-wlpwl\") pod \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\" (UID: \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\") " Oct 2 19:35:08.069092 kubelet[1440]: I1002 19:35:08.068924 1440 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9ec7cd66-c9a5-49cf-8739-3f0fd159173b" (UID: "9ec7cd66-c9a5-49cf-8739-3f0fd159173b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:08.069092 kubelet[1440]: I1002 19:35:08.068938 1440 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-bpf-maps\") pod \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\" (UID: \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\") " Oct 2 19:35:08.069092 kubelet[1440]: I1002 19:35:08.068930 1440 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9ec7cd66-c9a5-49cf-8739-3f0fd159173b" (UID: "9ec7cd66-c9a5-49cf-8739-3f0fd159173b"). 
InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:08.069092 kubelet[1440]: I1002 19:35:08.068911 1440 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9ec7cd66-c9a5-49cf-8739-3f0fd159173b" (UID: "9ec7cd66-c9a5-49cf-8739-3f0fd159173b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:08.069092 kubelet[1440]: I1002 19:35:08.068967 1440 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-cilium-ipsec-secrets\") pod \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\" (UID: \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\") " Oct 2 19:35:08.069092 kubelet[1440]: I1002 19:35:08.068972 1440 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9ec7cd66-c9a5-49cf-8739-3f0fd159173b" (UID: "9ec7cd66-c9a5-49cf-8739-3f0fd159173b"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:08.069092 kubelet[1440]: I1002 19:35:08.068987 1440 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-hostproc\") pod \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\" (UID: \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\") " Oct 2 19:35:08.069092 kubelet[1440]: I1002 19:35:08.069004 1440 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-cni-path\") pod \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\" (UID: \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\") " Oct 2 19:35:08.069092 kubelet[1440]: I1002 19:35:08.069027 1440 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-hubble-tls\") pod \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\" (UID: \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\") " Oct 2 19:35:08.069092 kubelet[1440]: I1002 19:35:08.069047 1440 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-clustermesh-secrets\") pod \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\" (UID: \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\") " Oct 2 19:35:08.069092 kubelet[1440]: I1002 19:35:08.069064 1440 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-xtables-lock\") pod \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\" (UID: \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\") " Oct 2 19:35:08.069092 kubelet[1440]: I1002 19:35:08.069084 1440 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-cilium-config-path\") pod \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\" (UID: \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\") " Oct 2 19:35:08.069447 kubelet[1440]: I1002 19:35:08.069120 1440 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-lib-modules\") pod \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\" (UID: \"9ec7cd66-c9a5-49cf-8739-3f0fd159173b\") " Oct 2 19:35:08.069447 kubelet[1440]: I1002 19:35:08.069147 1440 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-host-proc-sys-kernel\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:35:08.069447 kubelet[1440]: I1002 19:35:08.069157 1440 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-host-proc-sys-net\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:35:08.069447 kubelet[1440]: I1002 19:35:08.069166 1440 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-cilium-cgroup\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:35:08.069447 kubelet[1440]: I1002 19:35:08.069180 1440 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-cilium-run\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:35:08.069447 kubelet[1440]: I1002 19:35:08.069190 1440 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-etc-cni-netd\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:35:08.069447 kubelet[1440]: I1002 19:35:08.069199 1440 reconciler_common.go:300] "Volume detached for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-bpf-maps\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:35:08.069447 kubelet[1440]: I1002 19:35:08.069221 1440 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9ec7cd66-c9a5-49cf-8739-3f0fd159173b" (UID: "9ec7cd66-c9a5-49cf-8739-3f0fd159173b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:08.069447 kubelet[1440]: I1002 19:35:08.069357 1440 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9ec7cd66-c9a5-49cf-8739-3f0fd159173b" (UID: "9ec7cd66-c9a5-49cf-8739-3f0fd159173b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:08.069447 kubelet[1440]: I1002 19:35:08.069437 1440 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-hostproc" (OuterVolumeSpecName: "hostproc") pod "9ec7cd66-c9a5-49cf-8739-3f0fd159173b" (UID: "9ec7cd66-c9a5-49cf-8739-3f0fd159173b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:08.069679 kubelet[1440]: I1002 19:35:08.069463 1440 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-cni-path" (OuterVolumeSpecName: "cni-path") pod "9ec7cd66-c9a5-49cf-8739-3f0fd159173b" (UID: "9ec7cd66-c9a5-49cf-8739-3f0fd159173b"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:08.069679 kubelet[1440]: W1002 19:35:08.069475 1440 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/9ec7cd66-c9a5-49cf-8739-3f0fd159173b/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:35:08.071359 kubelet[1440]: I1002 19:35:08.071327 1440 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9ec7cd66-c9a5-49cf-8739-3f0fd159173b" (UID: "9ec7cd66-c9a5-49cf-8739-3f0fd159173b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:35:08.072127 kubelet[1440]: I1002 19:35:08.071792 1440 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-kube-api-access-wlpwl" (OuterVolumeSpecName: "kube-api-access-wlpwl") pod "9ec7cd66-c9a5-49cf-8739-3f0fd159173b" (UID: "9ec7cd66-c9a5-49cf-8739-3f0fd159173b"). InnerVolumeSpecName "kube-api-access-wlpwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:35:08.074145 kubelet[1440]: I1002 19:35:08.074109 1440 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "9ec7cd66-c9a5-49cf-8739-3f0fd159173b" (UID: "9ec7cd66-c9a5-49cf-8739-3f0fd159173b"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:35:08.074415 kubelet[1440]: I1002 19:35:08.074388 1440 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9ec7cd66-c9a5-49cf-8739-3f0fd159173b" (UID: "9ec7cd66-c9a5-49cf-8739-3f0fd159173b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:35:08.074664 kubelet[1440]: I1002 19:35:08.074635 1440 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9ec7cd66-c9a5-49cf-8739-3f0fd159173b" (UID: "9ec7cd66-c9a5-49cf-8739-3f0fd159173b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:35:08.169653 kubelet[1440]: I1002 19:35:08.169512 1440 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf2rs\" (UniqueName: \"kubernetes.io/projected/7ecd9162-e143-4224-93bb-13c35b233f11-kube-api-access-sf2rs\") pod \"7ecd9162-e143-4224-93bb-13c35b233f11\" (UID: \"7ecd9162-e143-4224-93bb-13c35b233f11\") " Oct 2 19:35:08.169653 kubelet[1440]: I1002 19:35:08.169561 1440 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7ecd9162-e143-4224-93bb-13c35b233f11-cilium-config-path\") pod \"7ecd9162-e143-4224-93bb-13c35b233f11\" (UID: \"7ecd9162-e143-4224-93bb-13c35b233f11\") " Oct 2 19:35:08.169653 kubelet[1440]: I1002 19:35:08.169587 1440 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-cilium-ipsec-secrets\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:35:08.169653 kubelet[1440]: I1002 19:35:08.169600 1440 
reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-hostproc\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:35:08.169653 kubelet[1440]: I1002 19:35:08.169610 1440 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-cni-path\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:35:08.169653 kubelet[1440]: I1002 19:35:08.169619 1440 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-hubble-tls\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:35:08.169653 kubelet[1440]: I1002 19:35:08.169629 1440 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-clustermesh-secrets\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:35:08.169653 kubelet[1440]: I1002 19:35:08.169637 1440 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-xtables-lock\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:35:08.169653 kubelet[1440]: I1002 19:35:08.169646 1440 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-cilium-config-path\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:35:08.169653 kubelet[1440]: I1002 19:35:08.169654 1440 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-lib-modules\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:35:08.169653 kubelet[1440]: I1002 19:35:08.169665 1440 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wlpwl\" (UniqueName: 
\"kubernetes.io/projected/9ec7cd66-c9a5-49cf-8739-3f0fd159173b-kube-api-access-wlpwl\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:35:08.170001 kubelet[1440]: W1002 19:35:08.169847 1440 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/7ecd9162-e143-4224-93bb-13c35b233f11/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:35:08.171662 kubelet[1440]: I1002 19:35:08.171623 1440 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ecd9162-e143-4224-93bb-13c35b233f11-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7ecd9162-e143-4224-93bb-13c35b233f11" (UID: "7ecd9162-e143-4224-93bb-13c35b233f11"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:35:08.173075 kubelet[1440]: I1002 19:35:08.173048 1440 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ecd9162-e143-4224-93bb-13c35b233f11-kube-api-access-sf2rs" (OuterVolumeSpecName: "kube-api-access-sf2rs") pod "7ecd9162-e143-4224-93bb-13c35b233f11" (UID: "7ecd9162-e143-4224-93bb-13c35b233f11"). InnerVolumeSpecName "kube-api-access-sf2rs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:35:08.249110 kubelet[1440]: I1002 19:35:08.249066 1440 scope.go:115] "RemoveContainer" containerID="26a52674d7ccfc4103c357082a2e2152cfd704982452b4af936b21c6fb337335" Oct 2 19:35:08.251598 env[1141]: time="2023-10-02T19:35:08.251555669Z" level=info msg="RemoveContainer for \"26a52674d7ccfc4103c357082a2e2152cfd704982452b4af936b21c6fb337335\"" Oct 2 19:35:08.253587 env[1141]: time="2023-10-02T19:35:08.253558890Z" level=info msg="RemoveContainer for \"26a52674d7ccfc4103c357082a2e2152cfd704982452b4af936b21c6fb337335\" returns successfully" Oct 2 19:35:08.253754 systemd[1]: Removed slice kubepods-besteffort-pod7ecd9162_e143_4224_93bb_13c35b233f11.slice. 
Oct 2 19:35:08.253918 kubelet[1440]: I1002 19:35:08.253746 1440 scope.go:115] "RemoveContainer" containerID="26a52674d7ccfc4103c357082a2e2152cfd704982452b4af936b21c6fb337335" Oct 2 19:35:08.255153 systemd[1]: Removed slice kubepods-burstable-pod9ec7cd66_c9a5_49cf_8739_3f0fd159173b.slice. Oct 2 19:35:08.256242 env[1141]: time="2023-10-02T19:35:08.256165705Z" level=error msg="ContainerStatus for \"26a52674d7ccfc4103c357082a2e2152cfd704982452b4af936b21c6fb337335\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"26a52674d7ccfc4103c357082a2e2152cfd704982452b4af936b21c6fb337335\": not found" Oct 2 19:35:08.256444 kubelet[1440]: E1002 19:35:08.256405 1440 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"26a52674d7ccfc4103c357082a2e2152cfd704982452b4af936b21c6fb337335\": not found" containerID="26a52674d7ccfc4103c357082a2e2152cfd704982452b4af936b21c6fb337335" Oct 2 19:35:08.256500 kubelet[1440]: I1002 19:35:08.256451 1440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:26a52674d7ccfc4103c357082a2e2152cfd704982452b4af936b21c6fb337335} err="failed to get container status \"26a52674d7ccfc4103c357082a2e2152cfd704982452b4af936b21c6fb337335\": rpc error: code = NotFound desc = an error occurred when try to find container \"26a52674d7ccfc4103c357082a2e2152cfd704982452b4af936b21c6fb337335\": not found" Oct 2 19:35:08.256500 kubelet[1440]: I1002 19:35:08.256462 1440 scope.go:115] "RemoveContainer" containerID="25008ecad6f0866ed885d24bb25b7998bd45652c79a805d7d1ddac1b4ed4e882" Oct 2 19:35:08.257306 env[1141]: time="2023-10-02T19:35:08.257274654Z" level=info msg="RemoveContainer for \"25008ecad6f0866ed885d24bb25b7998bd45652c79a805d7d1ddac1b4ed4e882\"" Oct 2 19:35:08.258924 env[1141]: time="2023-10-02T19:35:08.258888798Z" level=info msg="RemoveContainer for 
\"25008ecad6f0866ed885d24bb25b7998bd45652c79a805d7d1ddac1b4ed4e882\" returns successfully" Oct 2 19:35:08.270301 kubelet[1440]: I1002 19:35:08.270279 1440 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-sf2rs\" (UniqueName: \"kubernetes.io/projected/7ecd9162-e143-4224-93bb-13c35b233f11-kube-api-access-sf2rs\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:35:08.270407 kubelet[1440]: I1002 19:35:08.270397 1440 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7ecd9162-e143-4224-93bb-13c35b233f11-cilium-config-path\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 19:35:08.918437 kubelet[1440]: E1002 19:35:08.918375 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:08.950412 systemd[1]: var-lib-kubelet-pods-9ec7cd66\x2dc9a5\x2d49cf\x2d8739\x2d3f0fd159173b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:35:08.950507 systemd[1]: var-lib-kubelet-pods-9ec7cd66\x2dc9a5\x2d49cf\x2d8739\x2d3f0fd159173b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:35:08.950569 systemd[1]: var-lib-kubelet-pods-9ec7cd66\x2dc9a5\x2d49cf\x2d8739\x2d3f0fd159173b-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Oct 2 19:35:08.950629 systemd[1]: var-lib-kubelet-pods-7ecd9162\x2de143\x2d4224\x2d93bb\x2d13c35b233f11-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsf2rs.mount: Deactivated successfully. Oct 2 19:35:08.950679 systemd[1]: var-lib-kubelet-pods-9ec7cd66\x2dc9a5\x2d49cf\x2d8739\x2d3f0fd159173b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwlpwl.mount: Deactivated successfully. 
Oct 2 19:35:09.803041 kubelet[1440]: I1002 19:35:09.803002 1440 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=7ecd9162-e143-4224-93bb-13c35b233f11 path="/var/lib/kubelet/pods/7ecd9162-e143-4224-93bb-13c35b233f11/volumes" Oct 2 19:35:09.803423 kubelet[1440]: I1002 19:35:09.803400 1440 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=9ec7cd66-c9a5-49cf-8739-3f0fd159173b path="/var/lib/kubelet/pods/9ec7cd66-c9a5-49cf-8739-3f0fd159173b/volumes" Oct 2 19:35:09.919379 kubelet[1440]: E1002 19:35:09.919347 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"