Oct 2 20:08:06.743343 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Oct 2 20:08:06.743367 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Oct 2 17:55:37 -00 2023 Oct 2 20:08:06.743375 kernel: efi: EFI v2.70 by EDK II Oct 2 20:08:06.743381 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Oct 2 20:08:06.743386 kernel: random: crng init done Oct 2 20:08:06.743392 kernel: ACPI: Early table checksum verification disabled Oct 2 20:08:06.743398 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Oct 2 20:08:06.743405 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Oct 2 20:08:06.743411 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 20:08:06.743416 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 20:08:06.743422 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 20:08:06.743427 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 20:08:06.743433 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 20:08:06.743438 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 20:08:06.743446 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 20:08:06.743452 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 20:08:06.743458 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 20:08:06.743464 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Oct 2 20:08:06.743470 kernel: NUMA: Failed to initialise from firmware Oct 2 20:08:06.743476 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Oct 2 20:08:06.743481 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff] Oct 2 20:08:06.743487 kernel: Zone ranges: Oct 2 20:08:06.743493 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Oct 2 20:08:06.743500 kernel: DMA32 empty Oct 2 20:08:06.743505 kernel: Normal empty Oct 2 20:08:06.743511 kernel: Movable zone start for each node Oct 2 20:08:06.743517 kernel: Early memory node ranges Oct 2 20:08:06.743523 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Oct 2 20:08:06.743528 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Oct 2 20:08:06.743534 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Oct 2 20:08:06.743540 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Oct 2 20:08:06.743545 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Oct 2 20:08:06.743551 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Oct 2 20:08:06.743557 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Oct 2 20:08:06.743563 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Oct 2 20:08:06.743570 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Oct 2 20:08:06.743575 kernel: psci: probing for conduit method from ACPI. Oct 2 20:08:06.743581 kernel: psci: PSCIv1.1 detected in firmware. 
Oct 2 20:08:06.743587 kernel: psci: Using standard PSCI v0.2 function IDs Oct 2 20:08:06.743599 kernel: psci: Trusted OS migration not required Oct 2 20:08:06.743609 kernel: psci: SMC Calling Convention v1.1 Oct 2 20:08:06.743617 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Oct 2 20:08:06.743654 kernel: ACPI: SRAT not present Oct 2 20:08:06.743662 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784 Oct 2 20:08:06.743669 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096 Oct 2 20:08:06.743675 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Oct 2 20:08:06.743681 kernel: Detected PIPT I-cache on CPU0 Oct 2 20:08:06.743688 kernel: CPU features: detected: GIC system register CPU interface Oct 2 20:08:06.743694 kernel: CPU features: detected: Hardware dirty bit management Oct 2 20:08:06.743700 kernel: CPU features: detected: Spectre-v4 Oct 2 20:08:06.743706 kernel: CPU features: detected: Spectre-BHB Oct 2 20:08:06.743714 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 2 20:08:06.743720 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 2 20:08:06.743726 kernel: CPU features: detected: ARM erratum 1418040 Oct 2 20:08:06.743732 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Oct 2 20:08:06.743738 kernel: Policy zone: DMA Oct 2 20:08:06.743746 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca Oct 2 20:08:06.743752 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 2 20:08:06.743759 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 2 20:08:06.743765 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 20:08:06.743771 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 20:08:06.743778 kernel: Memory: 2459280K/2572288K available (9792K kernel code, 2092K rwdata, 7548K rodata, 34560K init, 779K bss, 113008K reserved, 0K cma-reserved) Oct 2 20:08:06.743785 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 2 20:08:06.743791 kernel: trace event string verifier disabled Oct 2 20:08:06.743798 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 2 20:08:06.743804 kernel: rcu: RCU event tracing is enabled. Oct 2 20:08:06.743811 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 2 20:08:06.743817 kernel: Trampoline variant of Tasks RCU enabled. Oct 2 20:08:06.743823 kernel: Tracing variant of Tasks RCU enabled. Oct 2 20:08:06.743830 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Oct 2 20:08:06.743836 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 2 20:08:06.743842 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 2 20:08:06.743848 kernel: GICv3: 256 SPIs implemented Oct 2 20:08:06.743856 kernel: GICv3: 0 Extended SPIs implemented Oct 2 20:08:06.743862 kernel: GICv3: Distributor has no Range Selector support Oct 2 20:08:06.743868 kernel: Root IRQ handler: gic_handle_irq Oct 2 20:08:06.743874 kernel: GICv3: 16 PPIs implemented Oct 2 20:08:06.743881 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Oct 2 20:08:06.743887 kernel: ACPI: SRAT not present Oct 2 20:08:06.743893 kernel: ITS [mem 0x08080000-0x0809ffff] Oct 2 20:08:06.743899 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Oct 2 20:08:06.743906 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Oct 2 20:08:06.743912 kernel: GICv3: using LPI property table @0x00000000400d0000 Oct 2 20:08:06.743918 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Oct 2 20:08:06.743924 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 2 20:08:06.743932 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Oct 2 20:08:06.743938 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Oct 2 20:08:06.743944 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Oct 2 20:08:06.743951 kernel: arm-pv: using stolen time PV Oct 2 20:08:06.743957 kernel: Console: colour dummy device 80x25 Oct 2 20:08:06.743964 kernel: ACPI: Core revision 20210730 Oct 2 20:08:06.743970 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Oct 2 20:08:06.743977 kernel: pid_max: default: 32768 minimum: 301 Oct 2 20:08:06.743983 kernel: LSM: Security Framework initializing Oct 2 20:08:06.743989 kernel: SELinux: Initializing. Oct 2 20:08:06.743997 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 20:08:06.744003 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 20:08:06.744009 kernel: rcu: Hierarchical SRCU implementation. Oct 2 20:08:06.744016 kernel: Platform MSI: ITS@0x8080000 domain created Oct 2 20:08:06.744022 kernel: PCI/MSI: ITS@0x8080000 domain created Oct 2 20:08:06.744028 kernel: Remapping and enabling EFI services. Oct 2 20:08:06.744034 kernel: smp: Bringing up secondary CPUs ... 
Oct 2 20:08:06.744041 kernel: Detected PIPT I-cache on CPU1 Oct 2 20:08:06.744047 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Oct 2 20:08:06.744055 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Oct 2 20:08:06.744061 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 2 20:08:06.744068 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Oct 2 20:08:06.744074 kernel: Detected PIPT I-cache on CPU2 Oct 2 20:08:06.744080 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Oct 2 20:08:06.744087 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Oct 2 20:08:06.744093 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 2 20:08:06.744100 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Oct 2 20:08:06.744106 kernel: Detected PIPT I-cache on CPU3 Oct 2 20:08:06.744112 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Oct 2 20:08:06.744120 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Oct 2 20:08:06.744126 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 2 20:08:06.744132 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Oct 2 20:08:06.744139 kernel: smp: Brought up 1 node, 4 CPUs Oct 2 20:08:06.744149 kernel: SMP: Total of 4 processors activated. Oct 2 20:08:06.744157 kernel: CPU features: detected: 32-bit EL0 Support Oct 2 20:08:06.744164 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Oct 2 20:08:06.744170 kernel: CPU features: detected: Common not Private translations Oct 2 20:08:06.744177 kernel: CPU features: detected: CRC32 instructions Oct 2 20:08:06.744184 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Oct 2 20:08:06.744190 kernel: CPU features: detected: LSE atomic instructions Oct 2 20:08:06.744197 kernel: CPU features: detected: Privileged Access Never Oct 2 20:08:06.744205 kernel: CPU features: detected: RAS Extension Support Oct 2 20:08:06.744212 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Oct 2 20:08:06.744219 kernel: CPU: All CPU(s) started at EL1 Oct 2 20:08:06.744235 kernel: alternatives: patching kernel code Oct 2 20:08:06.744242 kernel: devtmpfs: initialized Oct 2 20:08:06.744250 kernel: KASLR enabled Oct 2 20:08:06.744257 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 20:08:06.744264 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 2 20:08:06.744270 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 20:08:06.744277 kernel: SMBIOS 3.0.0 present. 
Oct 2 20:08:06.744284 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Oct 2 20:08:06.744291 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 20:08:06.744297 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 2 20:08:06.744304 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 2 20:08:06.744312 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 2 20:08:06.744319 kernel: audit: initializing netlink subsys (disabled) Oct 2 20:08:06.744326 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1 Oct 2 20:08:06.744332 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 20:08:06.744339 kernel: cpuidle: using governor menu Oct 2 20:08:06.744346 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Oct 2 20:08:06.744353 kernel: ASID allocator initialised with 32768 entries Oct 2 20:08:06.744360 kernel: ACPI: bus type PCI registered Oct 2 20:08:06.744366 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 20:08:06.744374 kernel: Serial: AMBA PL011 UART driver Oct 2 20:08:06.744381 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 2 20:08:06.744387 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Oct 2 20:08:06.744394 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 20:08:06.744401 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Oct 2 20:08:06.744407 kernel: cryptd: max_cpu_qlen set to 1000 Oct 2 20:08:06.744414 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 2 20:08:06.744421 kernel: ACPI: Added _OSI(Module Device) Oct 2 20:08:06.744427 kernel: ACPI: Added _OSI(Processor Device) Oct 2 20:08:06.744435 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 20:08:06.744442 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 20:08:06.744448 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 2 20:08:06.744455 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 20:08:06.744462 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 20:08:06.744468 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 2 20:08:06.744475 kernel: ACPI: Interpreter enabled Oct 2 20:08:06.744481 kernel: ACPI: Using GIC for interrupt routing Oct 2 20:08:06.744488 kernel: ACPI: MCFG table detected, 1 entries Oct 2 20:08:06.744496 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Oct 2 20:08:06.744503 kernel: printk: console [ttyAMA0] enabled Oct 2 20:08:06.744509 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 2 20:08:06.744659 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 2 20:08:06.744742 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 2 20:08:06.744807 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 2 20:08:06.747408 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Oct 2 20:08:06.747486 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Oct 2 20:08:06.747495 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Oct 2 20:08:06.747502 kernel: PCI host bridge to bus 0000:00 Oct 2 20:08:06.747569 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Oct 2 20:08:06.747639 kernel: pci_bus 0000:00: root bus resource [io 
0x0000-0xffff window] Oct 2 20:08:06.747695 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Oct 2 20:08:06.747747 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 2 20:08:06.747821 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Oct 2 20:08:06.747890 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Oct 2 20:08:06.747950 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Oct 2 20:08:06.748011 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Oct 2 20:08:06.748071 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Oct 2 20:08:06.748130 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Oct 2 20:08:06.748189 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Oct 2 20:08:06.748291 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Oct 2 20:08:06.748347 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Oct 2 20:08:06.748398 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 2 20:08:06.748449 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Oct 2 20:08:06.748458 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 2 20:08:06.748465 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 2 20:08:06.748472 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 2 20:08:06.748481 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 2 20:08:06.748487 kernel: iommu: Default domain type: Translated Oct 2 20:08:06.748494 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 2 20:08:06.748501 kernel: vgaarb: loaded Oct 2 20:08:06.748508 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 20:08:06.748515 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 2 20:08:06.748521 kernel: PTP clock support registered Oct 2 20:08:06.748528 kernel: Registered efivars operations Oct 2 20:08:06.748534 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 2 20:08:06.748541 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 20:08:06.748549 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 20:08:06.748561 kernel: pnp: PnP ACPI init Oct 2 20:08:06.748640 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Oct 2 20:08:06.748651 kernel: pnp: PnP ACPI: found 1 devices Oct 2 20:08:06.748657 kernel: NET: Registered PF_INET protocol family Oct 2 20:08:06.748664 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 2 20:08:06.748671 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 2 20:08:06.748678 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 20:08:06.748686 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 2 20:08:06.748693 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Oct 2 20:08:06.748700 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 2 20:08:06.748706 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 20:08:06.748713 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 20:08:06.748720 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 20:08:06.748726 kernel: PCI: CLS 0 bytes, default 64 Oct 2 20:08:06.748733 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Oct 2 20:08:06.748739 kernel: kvm [1]: HYP mode not available Oct 2 20:08:06.748747 kernel: Initialise system trusted keyrings Oct 2 20:08:06.748754 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 2 20:08:06.748760 kernel: Key type asymmetric registered Oct 2 20:08:06.748767 kernel: Asymmetric key parser 'x509' registered Oct 2 20:08:06.748774 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 20:08:06.748780 kernel: io scheduler mq-deadline registered Oct 2 20:08:06.748787 kernel: io scheduler kyber registered Oct 2 20:08:06.748793 kernel: io scheduler bfq registered Oct 2 20:08:06.748800 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 2 20:08:06.748807 kernel: ACPI: button: Power Button [PWRB] Oct 2 20:08:06.748814 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 2 20:08:06.748874 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Oct 2 20:08:06.748883 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 20:08:06.748890 kernel: thunder_xcv, ver 1.0 Oct 2 20:08:06.748896 kernel: thunder_bgx, ver 1.0 Oct 2 20:08:06.748903 kernel: nicpf, ver 1.0 Oct 2 20:08:06.748909 kernel: nicvf, ver 1.0 Oct 2 20:08:06.748980 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 2 20:08:06.749039 kernel: rtc-efi rtc-efi.0: setting system clock to 2023-10-02T20:08:06 UTC (1696277286) Oct 2 20:08:06.749048 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 2 20:08:06.749054 kernel: NET: Registered PF_INET6 protocol family Oct 2 20:08:06.749061 kernel: Segment Routing with IPv6 Oct 2 20:08:06.749068 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 20:08:06.749074 kernel: NET: Registered PF_PACKET protocol family Oct 2 20:08:06.749081 kernel: Key type dns_resolver registered Oct 2 20:08:06.749088 
kernel: registered taskstats version 1 Oct 2 20:08:06.749096 kernel: Loading compiled-in X.509 certificates Oct 2 20:08:06.749103 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 3a2a38edc68cb70dc60ec0223a6460557b3bb28d' Oct 2 20:08:06.749109 kernel: Key type .fscrypt registered Oct 2 20:08:06.749116 kernel: Key type fscrypt-provisioning registered Oct 2 20:08:06.749122 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 2 20:08:06.749129 kernel: ima: Allocated hash algorithm: sha1 Oct 2 20:08:06.749135 kernel: ima: No architecture policies found Oct 2 20:08:06.749142 kernel: Freeing unused kernel memory: 34560K Oct 2 20:08:06.749148 kernel: Run /init as init process Oct 2 20:08:06.749156 kernel: with arguments: Oct 2 20:08:06.749163 kernel: /init Oct 2 20:08:06.749169 kernel: with environment: Oct 2 20:08:06.749175 kernel: HOME=/ Oct 2 20:08:06.749182 kernel: TERM=linux Oct 2 20:08:06.749189 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 20:08:06.749197 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 20:08:06.749206 systemd[1]: Detected virtualization kvm. Oct 2 20:08:06.749214 systemd[1]: Detected architecture arm64. Oct 2 20:08:06.749231 systemd[1]: Running in initrd. Oct 2 20:08:06.749238 systemd[1]: No hostname configured, using default hostname. Oct 2 20:08:06.749245 systemd[1]: Hostname set to . Oct 2 20:08:06.749252 systemd[1]: Initializing machine ID from VM UUID. Oct 2 20:08:06.749259 systemd[1]: Queued start job for default target initrd.target. Oct 2 20:08:06.749266 systemd[1]: Started systemd-ask-password-console.path. Oct 2 20:08:06.749273 systemd[1]: Reached target cryptsetup.target. Oct 2 20:08:06.749281 systemd[1]: Reached target paths.target. Oct 2 20:08:06.749288 systemd[1]: Reached target slices.target. Oct 2 20:08:06.749295 systemd[1]: Reached target swap.target. Oct 2 20:08:06.749302 systemd[1]: Reached target timers.target. Oct 2 20:08:06.749309 systemd[1]: Listening on iscsid.socket. Oct 2 20:08:06.749316 systemd[1]: Listening on iscsiuio.socket. Oct 2 20:08:06.749323 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 20:08:06.749331 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 20:08:06.749338 systemd[1]: Listening on systemd-journald.socket. Oct 2 20:08:06.749345 systemd[1]: Listening on systemd-networkd.socket. Oct 2 20:08:06.749352 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 20:08:06.749359 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 20:08:06.749366 systemd[1]: Reached target sockets.target. Oct 2 20:08:06.749373 systemd[1]: Starting kmod-static-nodes.service... Oct 2 20:08:06.749380 systemd[1]: Finished network-cleanup.service. Oct 2 20:08:06.749387 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 20:08:06.749395 systemd[1]: Starting systemd-journald.service... Oct 2 20:08:06.749403 systemd[1]: Starting systemd-modules-load.service... Oct 2 20:08:06.749410 systemd[1]: Starting systemd-resolved.service... Oct 2 20:08:06.749417 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 20:08:06.749424 systemd[1]: Finished kmod-static-nodes.service. Oct 2 20:08:06.749431 systemd[1]: Finished systemd-fsck-usr.service. 
Oct 2 20:08:06.749438 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 20:08:06.749445 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 20:08:06.749453 kernel: audit: type=1130 audit(1696277286.744:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:06.749461 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 20:08:06.749469 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 20:08:06.749481 systemd-journald[290]: Journal started Oct 2 20:08:06.749525 systemd-journald[290]: Runtime Journal (/run/log/journal/0dce29da468647c7ac695fb56119b409) is 6.0M, max 48.7M, 42.6M free. Oct 2 20:08:06.749557 kernel: Bridge firewalling registered Oct 2 20:08:06.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:06.728822 systemd-modules-load[291]: Inserted module 'overlay' Oct 2 20:08:06.752313 systemd[1]: Started systemd-journald.service. Oct 2 20:08:06.750699 systemd-modules-load[291]: Inserted module 'br_netfilter' Oct 2 20:08:06.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:06.755853 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 20:08:06.759607 kernel: audit: type=1130 audit(1696277286.755:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:06.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:06.765489 kernel: audit: type=1130 audit(1696277286.758:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:06.765509 kernel: SCSI subsystem initialized Oct 2 20:08:06.765497 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 20:08:06.770823 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 20:08:06.770839 kernel: device-mapper: uevent: version 1.0.3 Oct 2 20:08:06.770847 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 20:08:06.765843 systemd-resolved[292]: Positive Trust Anchors: Oct 2 20:08:06.775480 kernel: audit: type=1130 audit(1696277286.766:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:06.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:06.765850 systemd-resolved[292]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 20:08:06.765876 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 20:08:06.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:06.767354 systemd[1]: Starting dracut-cmdline.service... Oct 2 20:08:06.785740 kernel: audit: type=1130 audit(1696277286.777:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:06.785757 dracut-cmdline[309]: dracut-dracut-053 Oct 2 20:08:06.785757 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca Oct 2 20:08:06.770487 systemd-resolved[292]: Defaulting to hostname 'linux'. Oct 2 20:08:06.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:06.774909 systemd[1]: Started systemd-resolved.service. Oct 2 20:08:06.796067 kernel: audit: type=1130 audit(1696277286.791:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:06.775684 systemd-modules-load[291]: Inserted module 'dm_multipath' Oct 2 20:08:06.779490 systemd[1]: Finished systemd-modules-load.service. Oct 2 20:08:06.791644 systemd[1]: Reached target nss-lookup.target. Oct 2 20:08:06.796064 systemd[1]: Starting systemd-sysctl.service... Oct 2 20:08:06.804218 systemd[1]: Finished systemd-sysctl.service. Oct 2 20:08:06.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:06.809246 kernel: audit: type=1130 audit(1696277286.804:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:06.859243 kernel: Loading iSCSI transport class v2.0-870. Oct 2 20:08:06.868249 kernel: iscsi: registered transport (tcp) Oct 2 20:08:06.884260 kernel: iscsi: registered transport (qla4xxx) Oct 2 20:08:06.884295 kernel: QLogic iSCSI HBA Driver Oct 2 20:08:06.928344 systemd[1]: Finished dracut-cmdline.service. Oct 2 20:08:06.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:08:06.932242 kernel: audit: type=1130 audit(1696277286.928:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:06.930000 systemd[1]: Starting dracut-pre-udev.service... Oct 2 20:08:06.979260 kernel: raid6: neonx8 gen() 13291 MB/s Oct 2 20:08:06.996257 kernel: raid6: neonx8 xor() 10519 MB/s Oct 2 20:08:07.019426 kernel: raid6: neonx4 gen() 13441 MB/s Oct 2 20:08:07.033670 kernel: raid6: neonx4 xor() 11129 MB/s Oct 2 20:08:07.047257 kernel: raid6: neonx2 gen() 12883 MB/s Oct 2 20:08:07.064250 kernel: raid6: neonx2 xor() 10070 MB/s Oct 2 20:08:07.081268 kernel: raid6: neonx1 gen() 10264 MB/s Oct 2 20:08:07.098256 kernel: raid6: neonx1 xor() 8447 MB/s Oct 2 20:08:07.115254 kernel: raid6: int64x8 gen() 6210 MB/s Oct 2 20:08:07.132256 kernel: raid6: int64x8 xor() 3518 MB/s Oct 2 20:08:07.149262 kernel: raid6: int64x4 gen() 7083 MB/s Oct 2 20:08:07.166266 kernel: raid6: int64x4 xor() 3788 MB/s Oct 2 20:08:07.183258 kernel: raid6: int64x2 gen() 6112 MB/s Oct 2 20:08:07.200254 kernel: raid6: int64x2 xor() 3254 MB/s Oct 2 20:08:07.217257 kernel: raid6: int64x1 gen() 4995 MB/s Oct 2 20:08:07.234397 kernel: raid6: int64x1 xor() 2643 MB/s Oct 2 20:08:07.234436 kernel: raid6: using algorithm neonx4 gen() 13441 MB/s Oct 2 20:08:07.234446 kernel: raid6: .... xor() 11129 MB/s, rmw enabled Oct 2 20:08:07.235483 kernel: raid6: using neon recovery algorithm Oct 2 20:08:07.246251 kernel: xor: measuring software checksum speed Oct 2 20:08:07.251529 kernel: 8regs : 17297 MB/sec Oct 2 20:08:07.251578 kernel: 32regs : 20755 MB/sec Oct 2 20:08:07.251595 kernel: arm64_neon : 27835 MB/sec Oct 2 20:08:07.251605 kernel: xor: using function: arm64_neon (27835 MB/sec) Oct 2 20:08:07.305266 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Oct 2 20:08:07.318959 systemd[1]: Finished dracut-pre-udev.service. Oct 2 20:08:07.324082 kernel: audit: type=1130 audit(1696277287.319:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:07.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:07.322000 audit: BPF prog-id=7 op=LOAD Oct 2 20:08:07.323000 audit: BPF prog-id=8 op=LOAD Oct 2 20:08:07.324372 systemd[1]: Starting systemd-udevd.service... Oct 2 20:08:07.339599 systemd-udevd[492]: Using default interface naming scheme 'v252'. Oct 2 20:08:07.342952 systemd[1]: Started systemd-udevd.service. Oct 2 20:08:07.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:07.348208 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 20:08:07.361214 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation Oct 2 20:08:07.394633 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 20:08:07.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:07.396315 systemd[1]: Starting systemd-udev-trigger.service... 
Oct 2 20:08:07.431201 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 20:08:07.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:07.483651 kernel: virtio_blk virtio1: [vda] 9289728 512-byte logical blocks (4.76 GB/4.43 GiB) Oct 2 20:08:07.486250 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 20:08:07.498253 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (549) Oct 2 20:08:07.499086 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 20:08:07.503920 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 20:08:07.504969 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 2 20:08:07.511358 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 20:08:07.514742 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 20:08:07.516455 systemd[1]: Starting disk-uuid.service... Oct 2 20:08:07.533351 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 20:08:08.543247 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 20:08:08.543847 disk-uuid[563]: The operation has completed successfully. Oct 2 20:08:08.569786 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 20:08:08.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:08.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:08.569879 systemd[1]: Finished disk-uuid.service. Oct 2 20:08:08.571446 systemd[1]: Starting verity-setup.service... Oct 2 20:08:08.587252 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 2 20:08:08.609678 systemd[1]: Found device dev-mapper-usr.device. Oct 2 20:08:08.611801 systemd[1]: Mounting sysusr-usr.mount... Oct 2 20:08:08.614513 systemd[1]: Finished verity-setup.service. Oct 2 20:08:08.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:08.659002 systemd[1]: Mounted sysusr-usr.mount. Oct 2 20:08:08.660335 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 20:08:08.659868 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 2 20:08:08.660557 systemd[1]: Starting ignition-setup.service... Oct 2 20:08:08.662783 systemd[1]: Starting parse-ip-for-networkd.service... Oct 2 20:08:08.670846 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 2 20:08:08.670894 kernel: BTRFS info (device vda6): using free space tree Oct 2 20:08:08.670904 kernel: BTRFS info (device vda6): has skinny extents Oct 2 20:08:08.681468 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 20:08:08.754365 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 20:08:08.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:08:08.755000 audit: BPF prog-id=9 op=LOAD Oct 2 20:08:08.756508 systemd[1]: Starting systemd-networkd.service... Oct 2 20:08:08.776898 systemd-networkd[731]: lo: Link UP Oct 2 20:08:08.776910 systemd-networkd[731]: lo: Gained carrier Oct 2 20:08:08.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:08.777322 systemd-networkd[731]: Enumeration completed Oct 2 20:08:08.777445 systemd[1]: Started systemd-networkd.service. Oct 2 20:08:08.777517 systemd-networkd[731]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 20:08:08.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:08.778718 systemd[1]: Reached target network.target. Oct 2 20:08:08.779407 systemd-networkd[731]: eth0: Link UP Oct 2 20:08:08.779411 systemd-networkd[731]: eth0: Gained carrier Oct 2 20:08:08.781677 systemd[1]: Starting iscsiuio.service... Oct 2 20:08:08.782793 systemd[1]: Finished ignition-setup.service. Oct 2 20:08:08.784985 systemd[1]: Starting ignition-fetch-offline.service... Oct 2 20:08:08.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:08.792524 systemd[1]: Started iscsiuio.service. Oct 2 20:08:08.794327 systemd[1]: Starting iscsid.service... Oct 2 20:08:08.798988 iscsid[739]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 20:08:08.798988 iscsid[739]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Oct 2 20:08:08.798988 iscsid[739]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Oct 2 20:08:08.798988 iscsid[739]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 20:08:08.798988 iscsid[739]: If using hardware iscsi like qla4xxx this message can be ignored. Oct 2 20:08:08.798988 iscsid[739]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 20:08:08.798988 iscsid[739]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 20:08:08.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:08.807546 systemd[1]: Started iscsid.service. Oct 2 20:08:08.809726 systemd[1]: Starting dracut-initqueue.service... Oct 2 20:08:08.811438 systemd-networkd[731]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 2 20:08:08.825291 systemd[1]: Finished dracut-initqueue.service. Oct 2 20:08:08.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:08.826260 systemd[1]: Reached target remote-fs-pre.target. 
Oct 2 20:08:08.827686 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 20:08:08.829276 systemd[1]: Reached target remote-fs.target. Oct 2 20:08:08.831657 systemd[1]: Starting dracut-pre-mount.service... Oct 2 20:08:08.841891 systemd[1]: Finished dracut-pre-mount.service. Oct 2 20:08:08.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:08.897571 ignition[735]: Ignition 2.14.0 Oct 2 20:08:08.897581 ignition[735]: Stage: fetch-offline Oct 2 20:08:08.897629 ignition[735]: no configs at "/usr/lib/ignition/base.d" Oct 2 20:08:08.897638 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 20:08:08.897768 ignition[735]: parsed url from cmdline: "" Oct 2 20:08:08.897771 ignition[735]: no config URL provided Oct 2 20:08:08.897776 ignition[735]: reading system config file "/usr/lib/ignition/user.ign" Oct 2 20:08:08.897783 ignition[735]: no config at "/usr/lib/ignition/user.ign" Oct 2 20:08:08.897803 ignition[735]: op(1): [started] loading QEMU firmware config module Oct 2 20:08:08.897808 ignition[735]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 2 20:08:08.903309 ignition[735]: op(1): [finished] loading QEMU firmware config module Oct 2 20:08:08.922127 ignition[735]: parsing config with SHA512: 74699e69be074da8e6f1de9ef7f5a3d443d7781f89dbd16c0455c80c0104b05e80359c3e4275db9d9119f6a56aea87a33d125f5aefefb2b3ec5616344d264dc6 Oct 2 20:08:08.940817 unknown[735]: fetched base config from "system" Oct 2 20:08:08.941280 unknown[735]: fetched user config from "qemu" Oct 2 20:08:08.941775 ignition[735]: fetch-offline: fetch-offline passed Oct 2 20:08:08.942808 systemd[1]: Finished ignition-fetch-offline.service. Oct 2 20:08:08.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:08.941841 ignition[735]: Ignition finished successfully Oct 2 20:08:08.944276 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 2 20:08:08.945005 systemd[1]: Starting ignition-kargs.service... Oct 2 20:08:08.955882 ignition[761]: Ignition 2.14.0 Oct 2 20:08:08.955892 ignition[761]: Stage: kargs Oct 2 20:08:08.955982 ignition[761]: no configs at "/usr/lib/ignition/base.d" Oct 2 20:08:08.958074 systemd[1]: Finished ignition-kargs.service. Oct 2 20:08:08.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:08.955992 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 20:08:08.956757 ignition[761]: kargs: kargs passed Oct 2 20:08:08.960405 systemd[1]: Starting ignition-disks.service... 
Oct 2 20:08:08.956797 ignition[761]: Ignition finished successfully Oct 2 20:08:08.968392 ignition[767]: Ignition 2.14.0 Oct 2 20:08:08.968402 ignition[767]: Stage: disks Oct 2 20:08:08.968489 ignition[767]: no configs at "/usr/lib/ignition/base.d" Oct 2 20:08:08.968499 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 20:08:08.969250 ignition[767]: disks: disks passed Oct 2 20:08:08.969293 ignition[767]: Ignition finished successfully Oct 2 20:08:08.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:08.972443 systemd[1]: Finished ignition-disks.service. Oct 2 20:08:08.973579 systemd[1]: Reached target initrd-root-device.target. Oct 2 20:08:08.974912 systemd[1]: Reached target local-fs-pre.target. Oct 2 20:08:08.976185 systemd[1]: Reached target local-fs.target. Oct 2 20:08:08.977709 systemd[1]: Reached target sysinit.target. Oct 2 20:08:08.978978 systemd[1]: Reached target basic.target. Oct 2 20:08:08.981121 systemd[1]: Starting systemd-fsck-root.service... Oct 2 20:08:08.996369 systemd-fsck[776]: ROOT: clean, 603/553520 files, 56011/553472 blocks Oct 2 20:08:09.000633 systemd[1]: Finished systemd-fsck-root.service. Oct 2 20:08:09.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:09.004075 systemd[1]: Mounting sysroot.mount... Oct 2 20:08:09.011234 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 20:08:09.011502 systemd[1]: Mounted sysroot.mount. Oct 2 20:08:09.012239 systemd[1]: Reached target initrd-root-fs.target. Oct 2 20:08:09.014419 systemd[1]: Mounting sysroot-usr.mount... Oct 2 20:08:09.015307 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 2 20:08:09.015347 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 20:08:09.015371 systemd[1]: Reached target ignition-diskful.target. Oct 2 20:08:09.017682 systemd[1]: Mounted sysroot-usr.mount. Oct 2 20:08:09.020659 systemd[1]: Starting initrd-setup-root.service... Oct 2 20:08:09.026914 initrd-setup-root[786]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 20:08:09.031276 initrd-setup-root[794]: cut: /sysroot/etc/group: No such file or directory Oct 2 20:08:09.036108 initrd-setup-root[802]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 20:08:09.041192 initrd-setup-root[810]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 20:08:09.071204 systemd[1]: Finished initrd-setup-root.service. Oct 2 20:08:09.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:09.072846 systemd[1]: Starting ignition-mount.service... Oct 2 20:08:09.074138 systemd[1]: Starting sysroot-boot.service... Oct 2 20:08:09.080125 bash[827]: umount: /sysroot/usr/share/oem: not mounted. 
Oct 2 20:08:09.092491 ignition[829]: INFO : Ignition 2.14.0 Oct 2 20:08:09.092491 ignition[829]: INFO : Stage: mount Oct 2 20:08:09.094731 ignition[829]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 20:08:09.094731 ignition[829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 20:08:09.094731 ignition[829]: INFO : mount: mount passed Oct 2 20:08:09.094731 ignition[829]: INFO : Ignition finished successfully Oct 2 20:08:09.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:09.094942 systemd[1]: Finished ignition-mount.service. Oct 2 20:08:09.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:09.098605 systemd[1]: Finished sysroot-boot.service. Oct 2 20:08:09.621800 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 20:08:09.628247 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (837) Oct 2 20:08:09.630343 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 2 20:08:09.630363 kernel: BTRFS info (device vda6): using free space tree Oct 2 20:08:09.630372 kernel: BTRFS info (device vda6): has skinny extents Oct 2 20:08:09.633776 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 20:08:09.635312 systemd[1]: Starting ignition-files.service... Oct 2 20:08:09.651097 ignition[857]: INFO : Ignition 2.14.0 Oct 2 20:08:09.651097 ignition[857]: INFO : Stage: files Oct 2 20:08:09.652750 ignition[857]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 20:08:09.652750 ignition[857]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 20:08:09.652750 ignition[857]: DEBUG : files: compiled without relabeling support, skipping Oct 2 20:08:09.656005 ignition[857]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 20:08:09.656005 ignition[857]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 20:08:09.658903 ignition[857]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 20:08:09.658903 ignition[857]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 20:08:09.661983 ignition[857]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 20:08:09.661576 unknown[857]: wrote ssh authorized keys file for user: core Oct 2 20:08:09.665326 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Oct 2 20:08:09.665326 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1 Oct 2 20:08:09.825732 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 2 20:08:10.100296 ignition[857]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742 Oct 2 20:08:10.100296 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file 
"/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Oct 2 20:08:10.104977 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Oct 2 20:08:10.104977 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1 Oct 2 20:08:10.175185 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 2 20:08:10.294977 ignition[857]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c Oct 2 20:08:10.298120 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Oct 2 20:08:10.298120 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 20:08:10.298120 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://storage.googleapis.com/kubernetes-release/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1 Oct 2 20:08:10.350413 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Oct 2 20:08:10.635992 ignition[857]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db Oct 2 20:08:10.635992 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 20:08:10.640413 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 20:08:10.640413 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://storage.googleapis.com/kubernetes-release/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1 Oct 2 20:08:10.678385 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Oct 2 20:08:10.792574 systemd-networkd[731]: eth0: Gained IPv6LL Oct 2 20:08:11.406036 ignition[857]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d Oct 2 20:08:11.409343 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 20:08:11.409343 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Oct 2 20:08:11.409343 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 20:08:11.409343 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 20:08:11.409343 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 20:08:11.409343 ignition[857]: INFO : files: op(9): [started] processing unit "prepare-cni-plugins.service" Oct 2 20:08:11.409343 ignition[857]: INFO : files: op(9): op(a): [started] writing unit 
"prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 20:08:11.409343 ignition[857]: INFO : files: op(9): op(a): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 20:08:11.409343 ignition[857]: INFO : files: op(9): [finished] processing unit "prepare-cni-plugins.service" Oct 2 20:08:11.409343 ignition[857]: INFO : files: op(b): [started] processing unit "prepare-critools.service" Oct 2 20:08:11.409343 ignition[857]: INFO : files: op(b): op(c): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 20:08:11.409343 ignition[857]: INFO : files: op(b): op(c): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 20:08:11.409343 ignition[857]: INFO : files: op(b): [finished] processing unit "prepare-critools.service" Oct 2 20:08:11.409343 ignition[857]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 2 20:08:11.409343 ignition[857]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 2 20:08:11.409343 ignition[857]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 2 20:08:11.409343 ignition[857]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 2 20:08:11.409343 ignition[857]: INFO : files: op(f): [started] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 20:08:11.441974 ignition[857]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 20:08:11.441974 ignition[857]: INFO : files: op(10): [started] setting preset to enabled for "prepare-critools.service" Oct 2 20:08:11.441974 ignition[857]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 20:08:11.441974 ignition[857]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Oct 2 20:08:11.441974 ignition[857]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 2 20:08:11.458071 ignition[857]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 2 20:08:11.460505 ignition[857]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Oct 2 20:08:11.460505 ignition[857]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 20:08:11.460505 ignition[857]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 20:08:11.460505 ignition[857]: INFO : files: files passed Oct 2 20:08:11.460505 ignition[857]: INFO : Ignition finished successfully Oct 2 20:08:11.472237 kernel: kauditd_printk_skb: 23 callbacks suppressed Oct 2 20:08:11.472261 kernel: audit: type=1130 audit(1696277291.462:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:08:11.461468 systemd[1]: Finished ignition-files.service. Oct 2 20:08:11.463619 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 20:08:11.474440 initrd-setup-root-after-ignition[880]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Oct 2 20:08:11.468246 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 2 20:08:11.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.481408 initrd-setup-root-after-ignition[883]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 20:08:11.488032 kernel: audit: type=1130 audit(1696277291.477:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.488061 kernel: audit: type=1130 audit(1696277291.481:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.488072 kernel: audit: type=1131 audit(1696277291.481:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.468982 systemd[1]: Starting ignition-quench.service... Oct 2 20:08:11.474902 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 20:08:11.477452 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 2 20:08:11.477546 systemd[1]: Finished ignition-quench.service. Oct 2 20:08:11.482173 systemd[1]: Reached target ignition-complete.target. Oct 2 20:08:11.489523 systemd[1]: Starting initrd-parse-etc.service... Oct 2 20:08:11.505025 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 20:08:11.505129 systemd[1]: Finished initrd-parse-etc.service. Oct 2 20:08:11.512432 kernel: audit: type=1130 audit(1696277291.506:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.512455 kernel: audit: type=1131 audit(1696277291.506:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:08:11.507027 systemd[1]: Reached target initrd-fs.target. Oct 2 20:08:11.513122 systemd[1]: Reached target initrd.target. Oct 2 20:08:11.514659 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 20:08:11.515392 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 20:08:11.532664 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 20:08:11.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.534463 systemd[1]: Starting initrd-cleanup.service... Oct 2 20:08:11.538442 kernel: audit: type=1130 audit(1696277291.533:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.545212 systemd[1]: Stopped target nss-lookup.target. Oct 2 20:08:11.546133 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 20:08:11.547882 systemd[1]: Stopped target timers.target. Oct 2 20:08:11.549433 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 20:08:11.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.549549 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 20:08:11.555480 kernel: audit: type=1131 audit(1696277291.550:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.550925 systemd[1]: Stopped target initrd.target. Oct 2 20:08:11.554876 systemd[1]: Stopped target basic.target. Oct 2 20:08:11.556303 systemd[1]: Stopped target ignition-complete.target. Oct 2 20:08:11.557733 systemd[1]: Stopped target ignition-diskful.target. Oct 2 20:08:11.559151 systemd[1]: Stopped target initrd-root-device.target. Oct 2 20:08:11.560872 systemd[1]: Stopped target remote-fs.target. Oct 2 20:08:11.562585 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 20:08:11.564090 systemd[1]: Stopped target sysinit.target. Oct 2 20:08:11.565567 systemd[1]: Stopped target local-fs.target. Oct 2 20:08:11.567084 systemd[1]: Stopped target local-fs-pre.target. Oct 2 20:08:11.568498 systemd[1]: Stopped target swap.target. Oct 2 20:08:11.574554 kernel: audit: type=1131 audit(1696277291.571:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.569824 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 20:08:11.569967 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 20:08:11.580347 kernel: audit: type=1131 audit(1696277291.576:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:08:11.571368 systemd[1]: Stopped target cryptsetup.target. Oct 2 20:08:11.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.575394 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 20:08:11.575511 systemd[1]: Stopped dracut-initqueue.service. Oct 2 20:08:11.577260 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 20:08:11.577364 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 20:08:11.581276 systemd[1]: Stopped target paths.target. Oct 2 20:08:11.582515 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 20:08:11.586249 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 20:08:11.588097 systemd[1]: Stopped target slices.target. Oct 2 20:08:11.589794 systemd[1]: Stopped target sockets.target. Oct 2 20:08:11.591270 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 20:08:11.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.591347 systemd[1]: Closed iscsid.socket. Oct 2 20:08:11.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.592482 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 20:08:11.592600 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 20:08:11.594028 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 20:08:11.594116 systemd[1]: Stopped ignition-files.service. Oct 2 20:08:11.596278 systemd[1]: Stopping ignition-mount.service... Oct 2 20:08:11.597836 systemd[1]: Stopping iscsiuio.service... Oct 2 20:08:11.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.600589 systemd[1]: Stopping sysroot-boot.service... Oct 2 20:08:11.601284 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 20:08:11.601402 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 20:08:11.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.603004 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
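The Ignition "files" stage logged above fetches cni-plugins, crictl, kubeadm and kubelet over HTTPS and only reports "[finished] writing file" after the payload matches the expected sha512 sum ("file matches expected sum of: ..."). As a minimal sketch of that verify-before-install pattern (illustration only, not Ignition's actual Go implementation), using the kubeadm URL and digest copied from the entries above:

# Sketch: download a file and verify its sha512 digest before installing it,
# mirroring the "GET ..." / "file matches expected sum of" entries logged by Ignition.
import hashlib
import urllib.request

URL = ("https://storage.googleapis.com/kubernetes-release/release/"
       "v1.26.5/bin/linux/arm64/kubeadm")             # from the log above
EXPECTED_SHA512 = ("46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38f"
                   "abda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db")

def fetch_and_verify(url: str, expected: str, dest: str) -> None:
    digest = hashlib.sha512()
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        while chunk := resp.read(1 << 20):             # stream in 1 MiB chunks
            digest.update(chunk)
            out.write(chunk)
    if digest.hexdigest() != expected:
        raise ValueError(f"checksum mismatch for {url}")

# fetch_and_verify(URL, EXPECTED_SHA512, "/opt/bin/kubeadm")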
Oct 2 20:08:11.611312 ignition[897]: INFO : Ignition 2.14.0 Oct 2 20:08:11.611312 ignition[897]: INFO : Stage: umount Oct 2 20:08:11.611312 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 20:08:11.611312 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 20:08:11.611312 ignition[897]: INFO : umount: umount passed Oct 2 20:08:11.611312 ignition[897]: INFO : Ignition finished successfully Oct 2 20:08:11.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.603102 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 20:08:11.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.605821 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 20:08:11.605924 systemd[1]: Stopped iscsiuio.service. Oct 2 20:08:11.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.608138 systemd[1]: Stopped target network.target. Oct 2 20:08:11.609762 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 20:08:11.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.609808 systemd[1]: Closed iscsiuio.socket. Oct 2 20:08:11.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.612113 systemd[1]: Stopping systemd-networkd.service... Oct 2 20:08:11.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.613450 systemd[1]: Stopping systemd-resolved.service... Oct 2 20:08:11.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.616687 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 20:08:11.617206 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 20:08:11.617319 systemd[1]: Finished initrd-cleanup.service. Oct 2 20:08:11.620177 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 20:08:11.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.640000 audit: BPF prog-id=6 op=UNLOAD Oct 2 20:08:11.620282 systemd[1]: Stopped ignition-mount.service. Oct 2 20:08:11.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 2 20:08:11.623325 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 20:08:11.623392 systemd[1]: Stopped ignition-disks.service. Oct 2 20:08:11.626649 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 20:08:11.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.626696 systemd[1]: Stopped ignition-kargs.service. Oct 2 20:08:11.627280 systemd-networkd[731]: eth0: DHCPv6 lease lost Oct 2 20:08:11.628920 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 20:08:11.628960 systemd[1]: Stopped ignition-setup.service. Oct 2 20:08:11.630717 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 20:08:11.630833 systemd[1]: Stopped systemd-resolved.service. Oct 2 20:08:11.633252 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 20:08:11.654000 audit: BPF prog-id=9 op=UNLOAD Oct 2 20:08:11.633340 systemd[1]: Stopped systemd-networkd.service. Oct 2 20:08:11.634609 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 20:08:11.634637 systemd[1]: Closed systemd-networkd.socket. Oct 2 20:08:11.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.637426 systemd[1]: Stopping network-cleanup.service... Oct 2 20:08:11.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.638971 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 20:08:11.639036 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 20:08:11.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.640525 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 20:08:11.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.640580 systemd[1]: Stopped systemd-sysctl.service. Oct 2 20:08:11.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.643789 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 20:08:11.643848 systemd[1]: Stopped systemd-modules-load.service. Oct 2 20:08:11.645875 systemd[1]: Stopping systemd-udevd.service... Oct 2 20:08:11.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.650676 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 20:08:11.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:08:11.655405 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 20:08:11.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.655527 systemd[1]: Stopped network-cleanup.service. Oct 2 20:08:11.658455 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 20:08:11.658588 systemd[1]: Stopped systemd-udevd.service. Oct 2 20:08:11.659829 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 20:08:11.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.659866 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 20:08:11.662068 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 20:08:11.662106 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 20:08:11.664073 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 20:08:11.664128 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 20:08:11.665107 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 20:08:11.665148 systemd[1]: Stopped dracut-cmdline.service. Oct 2 20:08:11.666625 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 20:08:11.666669 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 20:08:11.669704 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 20:08:11.671144 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 2 20:08:11.671238 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Oct 2 20:08:11.673481 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 20:08:11.673524 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 20:08:11.674415 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 20:08:11.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.674455 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 20:08:11.677341 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 2 20:08:11.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:11.677839 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 20:08:11.677928 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 20:08:11.695974 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 20:08:11.696069 systemd[1]: Stopped sysroot-boot.service. Oct 2 20:08:11.697641 systemd[1]: Reached target initrd-switch-root.target. Oct 2 20:08:11.699089 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 20:08:11.699146 systemd[1]: Stopped initrd-setup-root.service. Oct 2 20:08:11.701389 systemd[1]: Starting initrd-switch-root.service... 
Oct 2 20:08:11.709481 systemd[1]: Switching root. Oct 2 20:08:11.727135 iscsid[739]: iscsid shutting down. Oct 2 20:08:11.727845 systemd-journald[290]: Received SIGTERM from PID 1 (n/a). Oct 2 20:08:11.727892 systemd-journald[290]: Journal stopped Oct 2 20:08:13.890585 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 20:08:13.890673 kernel: SELinux: Class anon_inode not defined in policy. Oct 2 20:08:13.890685 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 20:08:13.890705 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 20:08:13.890715 kernel: SELinux: policy capability open_perms=1 Oct 2 20:08:13.890725 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 20:08:13.890748 kernel: SELinux: policy capability always_check_network=0 Oct 2 20:08:13.890757 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 20:08:13.890767 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 20:08:13.890786 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 20:08:13.890795 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 20:08:13.890808 systemd[1]: Successfully loaded SELinux policy in 35.247ms. Oct 2 20:08:13.890838 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.855ms. Oct 2 20:08:13.890850 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 20:08:13.890862 systemd[1]: Detected virtualization kvm. Oct 2 20:08:13.890874 systemd[1]: Detected architecture arm64. Oct 2 20:08:13.890884 systemd[1]: Detected first boot. Oct 2 20:08:13.890894 systemd[1]: Initializing machine ID from VM UUID. Oct 2 20:08:13.890904 systemd[1]: Populated /etc with preset unit settings. Oct 2 20:08:13.890915 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 20:08:13.890927 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 20:08:13.890938 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 20:08:13.890951 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 20:08:13.890962 systemd[1]: Stopped iscsid.service. Oct 2 20:08:13.890972 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 20:08:13.890982 systemd[1]: Stopped initrd-switch-root.service. Oct 2 20:08:13.890992 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 20:08:13.891003 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 20:08:13.891013 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 20:08:13.891025 systemd[1]: Created slice system-getty.slice. Oct 2 20:08:13.891035 systemd[1]: Created slice system-modprobe.slice. Oct 2 20:08:13.891045 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 20:08:13.891057 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 20:08:13.891067 systemd[1]: Created slice system-systemd\x2dfsck.slice. 
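After the switch root, the kernel prints the SELinux policy capabilities it loaded ("policy capability open_perms=1", "always_check_network=0", and so on). A small sketch, assuming selinuxfs is mounted at its usual location /sys/fs/selinux, that re-reads the same capabilities at runtime:

# Sketch: list the SELinux policy capabilities reported in the kernel lines above
# by reading selinuxfs. Each file under policy_capabilities contains 0 or 1.
from pathlib import Path

CAPS_DIR = Path("/sys/fs/selinux/policy_capabilities")

def policy_capabilities() -> dict:
    caps = {}
    for entry in sorted(CAPS_DIR.iterdir()):
        caps[entry.name] = entry.read_text().strip() == "1"
    return caps

if __name__ == "__main__":
    for name, enabled in policy_capabilities().items():
        print(f"{name}={int(enabled)}")    # e.g. open_perms=1, always_check_network=0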
Oct 2 20:08:13.891077 systemd[1]: Created slice user.slice. Oct 2 20:08:13.891087 systemd[1]: Started systemd-ask-password-console.path. Oct 2 20:08:13.891107 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 20:08:13.891118 systemd[1]: Set up automount boot.automount. Oct 2 20:08:13.891130 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 20:08:13.891140 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 20:08:13.891151 systemd[1]: Stopped target initrd-fs.target. Oct 2 20:08:13.891162 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 20:08:13.891172 systemd[1]: Reached target integritysetup.target. Oct 2 20:08:13.891183 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 20:08:13.891193 systemd[1]: Reached target remote-fs.target. Oct 2 20:08:13.891204 systemd[1]: Reached target slices.target. Oct 2 20:08:13.891215 systemd[1]: Reached target swap.target. Oct 2 20:08:13.891256 systemd[1]: Reached target torcx.target. Oct 2 20:08:13.891270 systemd[1]: Reached target veritysetup.target. Oct 2 20:08:13.891281 systemd[1]: Listening on systemd-coredump.socket. Oct 2 20:08:13.891291 systemd[1]: Listening on systemd-initctl.socket. Oct 2 20:08:13.891302 systemd[1]: Listening on systemd-networkd.socket. Oct 2 20:08:13.891312 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 20:08:13.891324 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 20:08:13.891335 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 20:08:13.891347 systemd[1]: Mounting dev-hugepages.mount... Oct 2 20:08:13.891358 systemd[1]: Mounting dev-mqueue.mount... Oct 2 20:08:13.891368 systemd[1]: Mounting media.mount... Oct 2 20:08:13.891378 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 20:08:13.891388 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 20:08:13.891399 systemd[1]: Mounting tmp.mount... Oct 2 20:08:13.891409 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 20:08:13.891420 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 20:08:13.891430 systemd[1]: Starting kmod-static-nodes.service... Oct 2 20:08:13.891441 systemd[1]: Starting modprobe@configfs.service... Oct 2 20:08:13.891453 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 20:08:13.891463 systemd[1]: Starting modprobe@drm.service... Oct 2 20:08:13.891474 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 20:08:13.891488 systemd[1]: Starting modprobe@fuse.service... Oct 2 20:08:13.891500 systemd[1]: Starting modprobe@loop.service... Oct 2 20:08:13.891511 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 20:08:13.891521 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 20:08:13.891532 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 20:08:13.891544 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 20:08:13.891555 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 20:08:13.891572 systemd[1]: Stopped systemd-journald.service. Oct 2 20:08:13.891584 kernel: fuse: init (API version 7.34) Oct 2 20:08:13.891594 systemd[1]: Starting systemd-journald.service... Oct 2 20:08:13.891605 kernel: loop: module loaded Oct 2 20:08:13.891615 systemd[1]: Starting systemd-modules-load.service... Oct 2 20:08:13.891625 systemd[1]: Starting systemd-network-generator.service... Oct 2 20:08:13.891636 systemd[1]: Starting systemd-remount-fs.service... 
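The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop template units started above are what produce the kernel's "fuse: init (API version 7.34)" and "loop: module loaded" lines. A minimal sketch for checking afterwards that those modules are actually present (note that /proc/modules only lists loadable modules, while /sys/module also covers built-ins):

# Sketch: confirm the modules pulled in by the modprobe@ units above are present.
from pathlib import Path

def module_present(name: str) -> bool:
    if Path("/sys/module", name).is_dir():     # covers built-in and loaded modules
        return True
    with open("/proc/modules") as f:           # loadable modules only
        return any(line.split()[0] == name for line in f)

for mod in ("fuse", "loop", "dm_mod", "configfs"):
    print(f"{mod}: {'present' if module_present(mod) else 'missing'}")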
Oct 2 20:08:13.891646 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 20:08:13.891660 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 20:08:13.891670 systemd[1]: Stopped verity-setup.service. Oct 2 20:08:13.891681 systemd[1]: Mounted dev-hugepages.mount. Oct 2 20:08:13.891691 systemd[1]: Mounted dev-mqueue.mount. Oct 2 20:08:13.891701 systemd[1]: Mounted media.mount. Oct 2 20:08:13.891711 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 20:08:13.891721 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 20:08:13.891732 systemd[1]: Mounted tmp.mount. Oct 2 20:08:13.891742 systemd[1]: Finished kmod-static-nodes.service. Oct 2 20:08:13.891755 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 20:08:13.891765 systemd[1]: Finished modprobe@configfs.service. Oct 2 20:08:13.891776 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 20:08:13.891790 systemd-journald[992]: Journal started Oct 2 20:08:13.891834 systemd-journald[992]: Runtime Journal (/run/log/journal/0dce29da468647c7ac695fb56119b409) is 6.0M, max 48.7M, 42.6M free. Oct 2 20:08:11.808000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 20:08:11.972000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 20:08:11.972000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 20:08:11.972000 audit: BPF prog-id=10 op=LOAD Oct 2 20:08:11.972000 audit: BPF prog-id=10 op=UNLOAD Oct 2 20:08:11.972000 audit: BPF prog-id=11 op=LOAD Oct 2 20:08:11.972000 audit: BPF prog-id=11 op=UNLOAD Oct 2 20:08:13.723000 audit: BPF prog-id=12 op=LOAD Oct 2 20:08:13.723000 audit: BPF prog-id=3 op=UNLOAD Oct 2 20:08:13.723000 audit: BPF prog-id=13 op=LOAD Oct 2 20:08:13.723000 audit: BPF prog-id=14 op=LOAD Oct 2 20:08:13.723000 audit: BPF prog-id=4 op=UNLOAD Oct 2 20:08:13.723000 audit: BPF prog-id=5 op=UNLOAD Oct 2 20:08:13.724000 audit: BPF prog-id=15 op=LOAD Oct 2 20:08:13.724000 audit: BPF prog-id=12 op=UNLOAD Oct 2 20:08:13.724000 audit: BPF prog-id=16 op=LOAD Oct 2 20:08:13.724000 audit: BPF prog-id=17 op=LOAD Oct 2 20:08:13.724000 audit: BPF prog-id=13 op=UNLOAD Oct 2 20:08:13.724000 audit: BPF prog-id=14 op=UNLOAD Oct 2 20:08:13.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:13.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:13.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:13.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:08:13.736000 audit: BPF prog-id=15 op=UNLOAD Oct 2 20:08:13.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:13.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:13.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:13.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:13.853000 audit: BPF prog-id=18 op=LOAD Oct 2 20:08:13.855000 audit: BPF prog-id=19 op=LOAD Oct 2 20:08:13.855000 audit: BPF prog-id=20 op=LOAD Oct 2 20:08:13.855000 audit: BPF prog-id=16 op=UNLOAD Oct 2 20:08:13.855000 audit: BPF prog-id=17 op=UNLOAD Oct 2 20:08:13.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:13.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:13.889000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 20:08:13.889000 audit[992]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=fffffe2abae0 a2=4000 a3=1 items=0 ppid=1 pid=992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:13.889000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 20:08:13.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:13.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:12.014985 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T20:08:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 20:08:13.722308 systemd[1]: Queued start job for default target multi-user.target. 
Oct 2 20:08:12.015529 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T20:08:12Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 20:08:13.722319 systemd[1]: Unnecessary job was removed for dev-vda6.device. Oct 2 20:08:12.015555 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T20:08:12Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 20:08:13.725634 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 2 20:08:12.015595 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T20:08:12Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 20:08:12.015605 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T20:08:12Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 20:08:12.015635 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T20:08:12Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 20:08:12.015650 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T20:08:12Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 20:08:13.893722 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 20:08:12.015847 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T20:08:12Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 20:08:12.015880 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T20:08:12Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 20:08:12.015892 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T20:08:12Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 20:08:13.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:13.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:08:12.016314 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T20:08:12Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 20:08:12.016353 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T20:08:12Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 20:08:12.016371 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T20:08:12Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 20:08:12.016385 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T20:08:12Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 20:08:12.016401 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T20:08:12Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 20:08:12.016413 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T20:08:12Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 20:08:13.457626 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T20:08:13Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 20:08:13.457888 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T20:08:13Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 20:08:13.458032 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T20:08:13Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 20:08:13.458193 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T20:08:13Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 20:08:13.458307 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T20:08:13Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 20:08:13.458380 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2023-10-02T20:08:13Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 20:08:13.895397 systemd[1]: Started systemd-journald.service. 
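The torcx generator above ends by sealing its state to /run/metadata/torcx (TORCX_LOWER_PROFILES, TORCX_PROFILE_PATH, TORCX_BINDIR, TORCX_UNPACKDIR, ...). The log prints that content as one bracketed list; a small sketch for reading it back, assuming the on-disk file uses the usual one KEY="value" entry per line as consumed by EnvironmentFile=:

# Sketch: parse the torcx metadata the generator reports sealing above.
def read_torcx_metadata(path: str = "/run/metadata/torcx") -> dict:
    meta = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or "=" not in line:
                continue
            key, _, value = line.partition("=")
            meta[key] = value.strip('"')
    return meta

# Example: read_torcx_metadata()["TORCX_UNPACKDIR"] would be "/run/torcx/unpack"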
Oct 2 20:08:13.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:13.896093 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 20:08:13.896523 systemd[1]: Finished modprobe@drm.service. Oct 2 20:08:13.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:13.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:13.897641 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 20:08:13.897793 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 20:08:13.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:13.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:13.898867 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 20:08:13.899027 systemd[1]: Finished modprobe@fuse.service. Oct 2 20:08:13.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:13.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:13.900065 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 20:08:13.900209 systemd[1]: Finished modprobe@loop.service. Oct 2 20:08:13.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:13.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:13.901890 systemd[1]: Finished systemd-modules-load.service. Oct 2 20:08:13.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:13.903087 systemd[1]: Finished systemd-network-generator.service. Oct 2 20:08:13.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:13.904317 systemd[1]: Finished systemd-remount-fs.service. 
Oct 2 20:08:13.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:13.905551 systemd[1]: Reached target network-pre.target. Oct 2 20:08:13.907502 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 20:08:13.909552 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 20:08:13.910303 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 20:08:13.912170 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 20:08:13.914423 systemd[1]: Starting systemd-journal-flush.service... Oct 2 20:08:13.915333 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 20:08:13.916537 systemd[1]: Starting systemd-random-seed.service... Oct 2 20:08:13.917371 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 20:08:13.918530 systemd[1]: Starting systemd-sysctl.service... Oct 2 20:08:13.924404 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 20:08:13.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:13.925535 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 20:08:13.926061 systemd-journald[992]: Time spent on flushing to /var/log/journal/0dce29da468647c7ac695fb56119b409 is 13.880ms for 987 entries. Oct 2 20:08:13.926061 systemd-journald[992]: System Journal (/var/log/journal/0dce29da468647c7ac695fb56119b409) is 8.0M, max 195.6M, 187.6M free. Oct 2 20:08:13.948362 systemd-journald[992]: Received client request to flush runtime journal. Oct 2 20:08:13.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:13.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:13.927508 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 20:08:13.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:13.929913 systemd[1]: Starting systemd-sysusers.service... Oct 2 20:08:13.948848 udevadm[1030]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 2 20:08:13.931218 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 20:08:13.934405 systemd[1]: Starting systemd-udev-settle.service... Oct 2 20:08:13.942073 systemd[1]: Finished systemd-random-seed.service. Oct 2 20:08:13.943319 systemd[1]: Reached target first-boot-complete.target. Oct 2 20:08:13.947804 systemd[1]: Finished systemd-sysctl.service. Oct 2 20:08:13.949747 systemd[1]: Finished systemd-journal-flush.service. 
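journald reports flushing the runtime journal to persistent storage above (13.880ms for 987 entries, with the runtime journal at 6.0M of a 48.7M cap and the system journal at 8.0M of 195.6M). A rough sketch, similar in spirit to `journalctl --disk-usage` but scoped to the machine-id directory named in those lines, for measuring the same thing:

# Sketch: sum the on-disk size of the journal directories reported above.
import os

def dir_size(path: str) -> int:
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass                        # journal files can rotate away mid-walk
    return total

MACHINE_ID = "0dce29da468647c7ac695fb56119b409"    # from the journald lines above
for base in ("/run/log/journal", "/var/log/journal"):
    path = os.path.join(base, MACHINE_ID)
    if os.path.isdir(path):
        print(f"{path}: {dir_size(path) / (1 << 20):.1f} MiB")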
Oct 2 20:08:13.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:13.950896 systemd[1]: Finished systemd-sysusers.service. Oct 2 20:08:13.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:13.952832 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 20:08:13.969240 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 20:08:13.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:14.289253 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 20:08:14.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:14.290000 audit: BPF prog-id=21 op=LOAD Oct 2 20:08:14.290000 audit: BPF prog-id=22 op=LOAD Oct 2 20:08:14.290000 audit: BPF prog-id=7 op=UNLOAD Oct 2 20:08:14.290000 audit: BPF prog-id=8 op=UNLOAD Oct 2 20:08:14.291518 systemd[1]: Starting systemd-udevd.service... Oct 2 20:08:14.310879 systemd-udevd[1037]: Using default interface naming scheme 'v252'. Oct 2 20:08:14.322713 systemd[1]: Started systemd-udevd.service. Oct 2 20:08:14.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:14.324000 audit: BPF prog-id=23 op=LOAD Oct 2 20:08:14.326807 systemd[1]: Starting systemd-networkd.service... Oct 2 20:08:14.332000 audit: BPF prog-id=24 op=LOAD Oct 2 20:08:14.332000 audit: BPF prog-id=25 op=LOAD Oct 2 20:08:14.332000 audit: BPF prog-id=26 op=LOAD Oct 2 20:08:14.333616 systemd[1]: Starting systemd-userdbd.service... Oct 2 20:08:14.350065 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Oct 2 20:08:14.364964 systemd[1]: Started systemd-userdbd.service. Oct 2 20:08:14.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:14.381065 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 20:08:14.419794 systemd-networkd[1046]: lo: Link UP Oct 2 20:08:14.419801 systemd-networkd[1046]: lo: Gained carrier Oct 2 20:08:14.420123 systemd-networkd[1046]: Enumeration completed Oct 2 20:08:14.420212 systemd[1]: Started systemd-networkd.service. Oct 2 20:08:14.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:14.421316 systemd-networkd[1046]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 2 20:08:14.422544 systemd-networkd[1046]: eth0: Link UP Oct 2 20:08:14.422552 systemd-networkd[1046]: eth0: Gained carrier Oct 2 20:08:14.434192 systemd[1]: Finished systemd-udev-settle.service. Oct 2 20:08:14.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:14.436338 systemd[1]: Starting lvm2-activation-early.service... Oct 2 20:08:14.445400 systemd-networkd[1046]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 2 20:08:14.448659 lvm[1070]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 20:08:14.479088 systemd[1]: Finished lvm2-activation-early.service. Oct 2 20:08:14.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:14.480187 systemd[1]: Reached target cryptsetup.target. Oct 2 20:08:14.482210 systemd[1]: Starting lvm2-activation.service... Oct 2 20:08:14.486423 lvm[1071]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 20:08:14.518127 systemd[1]: Finished lvm2-activation.service. Oct 2 20:08:14.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:14.519104 systemd[1]: Reached target local-fs-pre.target. Oct 2 20:08:14.519981 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 20:08:14.520013 systemd[1]: Reached target local-fs.target. Oct 2 20:08:14.520811 systemd[1]: Reached target machines.target. Oct 2 20:08:14.522747 systemd[1]: Starting ldconfig.service... Oct 2 20:08:14.523771 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 20:08:14.523824 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 20:08:14.524908 systemd[1]: Starting systemd-boot-update.service... Oct 2 20:08:14.526702 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 20:08:14.528847 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 20:08:14.530773 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 20:08:14.530840 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 20:08:14.532289 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 2 20:08:14.536053 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1073 (bootctl) Oct 2 20:08:14.537319 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 20:08:14.549408 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 20:08:14.550283 systemd-tmpfiles[1076]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. 
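systemd-networkd brings eth0 up from /usr/lib/systemd/network/zz-default.network and then logs the DHCPv4 lease (10.0.0.13/16, gateway 10.0.0.1). A small sketch for confirming the configured address afterwards, assuming an iproute2 new enough to support JSON output (`ip -j`):

# Sketch: read back the IPv4 address systemd-networkd reports acquiring above.
import json
import subprocess

def ipv4_addresses(ifname: str = "eth0") -> list:
    out = subprocess.run(["ip", "-j", "addr", "show", ifname],
                         capture_output=True, text=True, check=True).stdout
    addrs = []
    for link in json.loads(out):
        for info in link.get("addr_info", []):
            if info.get("family") == "inet":
                addrs.append(f"{info['local']}/{info['prefixlen']}")
    return addrs

# Expected on this boot: ['10.0.0.13/16']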
Oct 2 20:08:14.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:14.561878 systemd-tmpfiles[1076]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 20:08:14.625043 systemd-tmpfiles[1076]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 20:08:14.632446 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 20:08:14.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:14.664146 systemd-fsck[1081]: fsck.fat 4.2 (2021-01-31) Oct 2 20:08:14.664146 systemd-fsck[1081]: /dev/vda1: 236 files, 113463/258078 clusters Oct 2 20:08:14.668860 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 20:08:14.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:14.723649 ldconfig[1072]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 20:08:14.726502 systemd[1]: Finished ldconfig.service. Oct 2 20:08:14.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:14.877359 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 20:08:14.878730 systemd[1]: Mounting boot.mount... Oct 2 20:08:14.886030 systemd[1]: Mounted boot.mount. Oct 2 20:08:14.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:14.893541 systemd[1]: Finished systemd-boot-update.service. Oct 2 20:08:14.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:14.943192 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 20:08:14.948533 systemd[1]: Starting audit-rules.service... Oct 2 20:08:14.950289 systemd[1]: Starting clean-ca-certificates.service... Oct 2 20:08:14.952331 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 20:08:14.954000 audit: BPF prog-id=27 op=LOAD Oct 2 20:08:14.955271 systemd[1]: Starting systemd-resolved.service... Oct 2 20:08:14.957000 audit: BPF prog-id=28 op=LOAD Oct 2 20:08:14.958790 systemd[1]: Starting systemd-timesyncd.service... Oct 2 20:08:14.960460 systemd[1]: Starting systemd-update-utmp.service... Oct 2 20:08:14.961728 systemd[1]: Finished clean-ca-certificates.service. Oct 2 20:08:14.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:08:14.962904 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 20:08:14.971000 audit[1095]: SYSTEM_BOOT pid=1095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 20:08:14.973123 systemd[1]: Finished systemd-update-utmp.service. Oct 2 20:08:14.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:14.978166 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 20:08:14.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:14.980213 systemd[1]: Starting systemd-update-done.service... Oct 2 20:08:14.988865 systemd[1]: Finished systemd-update-done.service. Oct 2 20:08:14.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:14.997000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 20:08:14.997000 audit[1106]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffe8ef580 a2=420 a3=0 items=0 ppid=1084 pid=1106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:14.997000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 20:08:14.998172 augenrules[1106]: No rules Oct 2 20:08:14.999078 systemd[1]: Finished audit-rules.service. Oct 2 20:08:15.006648 systemd[1]: Started systemd-timesyncd.service. Oct 2 20:08:15.007824 systemd[1]: Reached target time-set.target. Oct 2 20:08:15.008973 systemd-timesyncd[1094]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 2 20:08:15.009028 systemd-timesyncd[1094]: Initial clock synchronization to Mon 2023-10-02 20:08:14.843235 UTC. Oct 2 20:08:15.009702 systemd-resolved[1088]: Positive Trust Anchors: Oct 2 20:08:15.009713 systemd-resolved[1088]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 20:08:15.009743 systemd-resolved[1088]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 20:08:15.019933 systemd-resolved[1088]: Defaulting to hostname 'linux'. Oct 2 20:08:15.021355 systemd[1]: Started systemd-resolved.service. Oct 2 20:08:15.022209 systemd[1]: Reached target network.target. Oct 2 20:08:15.022979 systemd[1]: Reached target nss-lookup.target. 
Oct 2 20:08:15.023809 systemd[1]: Reached target sysinit.target. Oct 2 20:08:15.024672 systemd[1]: Started motdgen.path. Oct 2 20:08:15.025393 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 20:08:15.026612 systemd[1]: Started logrotate.timer. Oct 2 20:08:15.027486 systemd[1]: Started mdadm.timer. Oct 2 20:08:15.028166 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 20:08:15.029038 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 20:08:15.029069 systemd[1]: Reached target paths.target. Oct 2 20:08:15.029835 systemd[1]: Reached target timers.target. Oct 2 20:08:15.030919 systemd[1]: Listening on dbus.socket. Oct 2 20:08:15.032713 systemd[1]: Starting docker.socket... Oct 2 20:08:15.035996 systemd[1]: Listening on sshd.socket. Oct 2 20:08:15.036842 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 20:08:15.037272 systemd[1]: Listening on docker.socket. Oct 2 20:08:15.038091 systemd[1]: Reached target sockets.target. Oct 2 20:08:15.038882 systemd[1]: Reached target basic.target. Oct 2 20:08:15.039677 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 20:08:15.039709 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 20:08:15.040755 systemd[1]: Starting containerd.service... Oct 2 20:08:15.042482 systemd[1]: Starting dbus.service... Oct 2 20:08:15.044086 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 20:08:15.046010 systemd[1]: Starting extend-filesystems.service... Oct 2 20:08:15.046919 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 20:08:15.048143 systemd[1]: Starting motdgen.service... Oct 2 20:08:15.049940 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 20:08:15.053401 systemd[1]: Starting prepare-critools.service... Oct 2 20:08:15.055380 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 20:08:15.057352 systemd[1]: Starting sshd-keygen.service... Oct 2 20:08:15.057531 jq[1116]: false Oct 2 20:08:15.061310 systemd[1]: Starting systemd-logind.service... Oct 2 20:08:15.062110 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 20:08:15.062213 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 2 20:08:15.062708 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 20:08:15.063389 systemd[1]: Starting update-engine.service... Oct 2 20:08:15.065294 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 20:08:15.070945 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 20:08:15.071136 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 20:08:15.072885 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 20:08:15.073060 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Oct 2 20:08:15.076246 jq[1132]: true Oct 2 20:08:15.084354 extend-filesystems[1117]: Found vda Oct 2 20:08:15.084354 extend-filesystems[1117]: Found vda1 Oct 2 20:08:15.084354 extend-filesystems[1117]: Found vda2 Oct 2 20:08:15.084354 extend-filesystems[1117]: Found vda3 Oct 2 20:08:15.084354 extend-filesystems[1117]: Found usr Oct 2 20:08:15.084354 extend-filesystems[1117]: Found vda4 Oct 2 20:08:15.084354 extend-filesystems[1117]: Found vda6 Oct 2 20:08:15.084354 extend-filesystems[1117]: Found vda7 Oct 2 20:08:15.084354 extend-filesystems[1117]: Found vda9 Oct 2 20:08:15.084354 extend-filesystems[1117]: Checking size of /dev/vda9 Oct 2 20:08:15.096240 jq[1141]: true Oct 2 20:08:15.097323 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 20:08:15.097524 systemd[1]: Finished motdgen.service. Oct 2 20:08:15.125414 tar[1137]: ./ Oct 2 20:08:15.125414 tar[1137]: ./macvlan Oct 2 20:08:15.125772 tar[1138]: crictl Oct 2 20:08:15.109395 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 20:08:15.125971 extend-filesystems[1117]: Old size kept for /dev/vda9 Oct 2 20:08:15.109570 systemd[1]: Finished extend-filesystems.service. Oct 2 20:08:15.131274 dbus-daemon[1115]: [system] SELinux support is enabled Oct 2 20:08:15.132492 systemd[1]: Started dbus.service. Oct 2 20:08:15.138156 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 20:08:15.138185 systemd[1]: Reached target system-config.target. Oct 2 20:08:15.139149 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 20:08:15.139196 systemd[1]: Reached target user-config.target. Oct 2 20:08:15.148341 bash[1165]: Updated "/home/core/.ssh/authorized_keys" Oct 2 20:08:15.149109 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 20:08:15.159511 systemd-logind[1128]: Watching system buttons on /dev/input/event0 (Power Button) Oct 2 20:08:15.160079 systemd-logind[1128]: New seat seat0. Oct 2 20:08:15.163639 systemd[1]: Started systemd-logind.service. Oct 2 20:08:15.183334 tar[1137]: ./static Oct 2 20:08:15.184309 update_engine[1130]: I1002 20:08:15.181212 1130 main.cc:92] Flatcar Update Engine starting Oct 2 20:08:15.190341 systemd[1]: Started update-engine.service. Oct 2 20:08:15.193027 systemd[1]: Started locksmithd.service. Oct 2 20:08:15.194209 update_engine[1130]: I1002 20:08:15.194177 1130 update_check_scheduler.cc:74] Next update check in 10m26s Oct 2 20:08:15.212966 tar[1137]: ./vlan Oct 2 20:08:15.219247 env[1140]: time="2023-10-02T20:08:15.219161240Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 20:08:15.243739 tar[1137]: ./portmap Oct 2 20:08:15.269418 env[1140]: time="2023-10-02T20:08:15.269349000Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 20:08:15.269640 env[1140]: time="2023-10-02T20:08:15.269605080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 20:08:15.276574 env[1140]: time="2023-10-02T20:08:15.276526240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 20:08:15.276574 env[1140]: time="2023-10-02T20:08:15.276570240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 20:08:15.276817 env[1140]: time="2023-10-02T20:08:15.276790040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 20:08:15.276817 env[1140]: time="2023-10-02T20:08:15.276813880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 2 20:08:15.276887 env[1140]: time="2023-10-02T20:08:15.276827040Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 20:08:15.276887 env[1140]: time="2023-10-02T20:08:15.276836400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 20:08:15.276926 env[1140]: time="2023-10-02T20:08:15.276907680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 20:08:15.277159 env[1140]: time="2023-10-02T20:08:15.277136400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 20:08:15.277293 env[1140]: time="2023-10-02T20:08:15.277271600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 20:08:15.277329 env[1140]: time="2023-10-02T20:08:15.277292400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 20:08:15.277361 env[1140]: time="2023-10-02T20:08:15.277345040Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 20:08:15.277361 env[1140]: time="2023-10-02T20:08:15.277358040Z" level=info msg="metadata content store policy set" policy=shared Oct 2 20:08:15.281210 env[1140]: time="2023-10-02T20:08:15.281181360Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 20:08:15.281298 env[1140]: time="2023-10-02T20:08:15.281213360Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 20:08:15.281298 env[1140]: time="2023-10-02T20:08:15.281235280Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 20:08:15.281298 env[1140]: time="2023-10-02T20:08:15.281271360Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 20:08:15.281298 env[1140]: time="2023-10-02T20:08:15.281286280Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 20:08:15.281371 env[1140]: time="2023-10-02T20:08:15.281300400Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Oct 2 20:08:15.281371 env[1140]: time="2023-10-02T20:08:15.281313400Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 20:08:15.281700 env[1140]: time="2023-10-02T20:08:15.281663320Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 20:08:15.281700 env[1140]: time="2023-10-02T20:08:15.281692080Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 20:08:15.281748 env[1140]: time="2023-10-02T20:08:15.281707840Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 20:08:15.281748 env[1140]: time="2023-10-02T20:08:15.281720400Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 2 20:08:15.281748 env[1140]: time="2023-10-02T20:08:15.281733960Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 20:08:15.281864 env[1140]: time="2023-10-02T20:08:15.281841240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 20:08:15.281940 env[1140]: time="2023-10-02T20:08:15.281921400Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 20:08:15.282167 env[1140]: time="2023-10-02T20:08:15.282145440Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 20:08:15.282191 env[1140]: time="2023-10-02T20:08:15.282176480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 20:08:15.282215 env[1140]: time="2023-10-02T20:08:15.282190040Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 20:08:15.282331 env[1140]: time="2023-10-02T20:08:15.282316720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 20:08:15.282354 env[1140]: time="2023-10-02T20:08:15.282336120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 20:08:15.282354 env[1140]: time="2023-10-02T20:08:15.282349120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 20:08:15.282397 env[1140]: time="2023-10-02T20:08:15.282361000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 20:08:15.282397 env[1140]: time="2023-10-02T20:08:15.282373080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 20:08:15.282397 env[1140]: time="2023-10-02T20:08:15.282385280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 20:08:15.282397 env[1140]: time="2023-10-02T20:08:15.282396080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 20:08:15.282474 env[1140]: time="2023-10-02T20:08:15.282407680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 20:08:15.282474 env[1140]: time="2023-10-02T20:08:15.282420440Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Oct 2 20:08:15.282551 env[1140]: time="2023-10-02T20:08:15.282534800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 20:08:15.282581 env[1140]: time="2023-10-02T20:08:15.282556240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 20:08:15.282581 env[1140]: time="2023-10-02T20:08:15.282578480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 20:08:15.282619 env[1140]: time="2023-10-02T20:08:15.282590160Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 20:08:15.282619 env[1140]: time="2023-10-02T20:08:15.282604520Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 20:08:15.282619 env[1140]: time="2023-10-02T20:08:15.282615440Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 20:08:15.282677 env[1140]: time="2023-10-02T20:08:15.282632720Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 20:08:15.282677 env[1140]: time="2023-10-02T20:08:15.282665760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 2 20:08:15.282918 env[1140]: time="2023-10-02T20:08:15.282862200Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri 
StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 20:08:15.283526 env[1140]: time="2023-10-02T20:08:15.282920600Z" level=info msg="Connect containerd service" Oct 2 20:08:15.283526 env[1140]: time="2023-10-02T20:08:15.282956360Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 20:08:15.283606 env[1140]: time="2023-10-02T20:08:15.283540240Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 20:08:15.284219 env[1140]: time="2023-10-02T20:08:15.283847960Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 2 20:08:15.284219 env[1140]: time="2023-10-02T20:08:15.283890280Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 2 20:08:15.284219 env[1140]: time="2023-10-02T20:08:15.283936360Z" level=info msg="containerd successfully booted in 0.065770s" Oct 2 20:08:15.284219 env[1140]: time="2023-10-02T20:08:15.283982320Z" level=info msg="Start subscribing containerd event" Oct 2 20:08:15.284219 env[1140]: time="2023-10-02T20:08:15.284028720Z" level=info msg="Start recovering state" Oct 2 20:08:15.284219 env[1140]: time="2023-10-02T20:08:15.284084640Z" level=info msg="Start event monitor" Oct 2 20:08:15.284219 env[1140]: time="2023-10-02T20:08:15.284114240Z" level=info msg="Start snapshots syncer" Oct 2 20:08:15.284219 env[1140]: time="2023-10-02T20:08:15.284125320Z" level=info msg="Start cni network conf syncer for default" Oct 2 20:08:15.284219 env[1140]: time="2023-10-02T20:08:15.284132600Z" level=info msg="Start streaming server" Oct 2 20:08:15.284010 systemd[1]: Started containerd.service. Oct 2 20:08:15.292313 tar[1137]: ./host-local Oct 2 20:08:15.317575 locksmithd[1170]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 20:08:15.328975 tar[1137]: ./vrf Oct 2 20:08:15.365749 tar[1137]: ./bridge Oct 2 20:08:15.407886 tar[1137]: ./tuning Oct 2 20:08:15.440251 tar[1137]: ./firewall Oct 2 20:08:15.475518 tar[1137]: ./host-device Oct 2 20:08:15.506032 systemd[1]: Finished prepare-critools.service. Oct 2 20:08:15.508033 tar[1137]: ./sbr Oct 2 20:08:15.535946 tar[1137]: ./loopback Oct 2 20:08:15.563134 tar[1137]: ./dhcp Oct 2 20:08:15.627717 tar[1137]: ./ptp Oct 2 20:08:15.655610 tar[1137]: ./ipvlan Oct 2 20:08:15.682717 tar[1137]: ./bandwidth Oct 2 20:08:15.717942 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 20:08:16.360324 systemd-networkd[1046]: eth0: Gained IPv6LL Oct 2 20:08:17.642545 sshd_keygen[1139]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 20:08:17.662694 systemd[1]: Finished sshd-keygen.service. Oct 2 20:08:17.664966 systemd[1]: Starting issuegen.service... Oct 2 20:08:17.670507 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 20:08:17.670660 systemd[1]: Finished issuegen.service. Oct 2 20:08:17.672893 systemd[1]: Starting systemd-user-sessions.service... Oct 2 20:08:17.679682 systemd[1]: Finished systemd-user-sessions.service. Oct 2 20:08:17.682109 systemd[1]: Started getty@tty1.service. Oct 2 20:08:17.684408 systemd[1]: Started serial-getty@ttyAMA0.service. Oct 2 20:08:17.685464 systemd[1]: Reached target getty.target. Oct 2 20:08:17.686319 systemd[1]: Reached target multi-user.target. Oct 2 20:08:17.688406 systemd[1]: Starting systemd-update-utmp-runlevel.service... 
Oct 2 20:08:17.695544 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 20:08:17.695700 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 20:08:17.696808 systemd[1]: Startup finished in 615ms (kernel) + 5.189s (initrd) + 5.931s (userspace) = 11.735s. Oct 2 20:08:19.570496 systemd[1]: Created slice system-sshd.slice. Oct 2 20:08:19.571680 systemd[1]: Started sshd@0-10.0.0.13:22-10.0.0.1:45162.service. Oct 2 20:08:19.624888 sshd[1197]: Accepted publickey for core from 10.0.0.1 port 45162 ssh2: RSA SHA256:JAs1apcNutvDkqjmOZG93AHJl+jbIx12KdulabYDxP4 Oct 2 20:08:19.627319 sshd[1197]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:08:19.650135 systemd[1]: Created slice user-500.slice. Oct 2 20:08:19.651338 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 20:08:19.653339 systemd-logind[1128]: New session 1 of user core. Oct 2 20:08:19.660417 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 20:08:19.661810 systemd[1]: Starting user@500.service... Oct 2 20:08:19.665446 (systemd)[1200]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:08:19.727973 systemd[1200]: Queued start job for default target default.target. Oct 2 20:08:19.728491 systemd[1200]: Reached target paths.target. Oct 2 20:08:19.728510 systemd[1200]: Reached target sockets.target. Oct 2 20:08:19.728520 systemd[1200]: Reached target timers.target. Oct 2 20:08:19.728529 systemd[1200]: Reached target basic.target. Oct 2 20:08:19.728589 systemd[1200]: Reached target default.target. Oct 2 20:08:19.728622 systemd[1200]: Startup finished in 56ms. Oct 2 20:08:19.728806 systemd[1]: Started user@500.service. Oct 2 20:08:19.730951 systemd[1]: Started session-1.scope. Oct 2 20:08:19.784027 systemd[1]: Started sshd@1-10.0.0.13:22-10.0.0.1:45178.service. Oct 2 20:08:19.843586 sshd[1209]: Accepted publickey for core from 10.0.0.1 port 45178 ssh2: RSA SHA256:JAs1apcNutvDkqjmOZG93AHJl+jbIx12KdulabYDxP4 Oct 2 20:08:19.844808 sshd[1209]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:08:19.848269 systemd-logind[1128]: New session 2 of user core. Oct 2 20:08:19.848923 systemd[1]: Started session-2.scope. Oct 2 20:08:19.905275 sshd[1209]: pam_unix(sshd:session): session closed for user core Oct 2 20:08:19.907995 systemd[1]: sshd@1-10.0.0.13:22-10.0.0.1:45178.service: Deactivated successfully. Oct 2 20:08:19.908613 systemd[1]: session-2.scope: Deactivated successfully. Oct 2 20:08:19.909109 systemd-logind[1128]: Session 2 logged out. Waiting for processes to exit. Oct 2 20:08:19.910479 systemd[1]: Started sshd@2-10.0.0.13:22-10.0.0.1:45194.service. Oct 2 20:08:19.911063 systemd-logind[1128]: Removed session 2. Oct 2 20:08:19.954722 sshd[1215]: Accepted publickey for core from 10.0.0.1 port 45194 ssh2: RSA SHA256:JAs1apcNutvDkqjmOZG93AHJl+jbIx12KdulabYDxP4 Oct 2 20:08:19.956389 sshd[1215]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:08:19.959608 systemd-logind[1128]: New session 3 of user core. Oct 2 20:08:19.960479 systemd[1]: Started session-3.scope. Oct 2 20:08:20.010942 sshd[1215]: pam_unix(sshd:session): session closed for user core Oct 2 20:08:20.013551 systemd[1]: sshd@2-10.0.0.13:22-10.0.0.1:45194.service: Deactivated successfully. Oct 2 20:08:20.014088 systemd[1]: session-3.scope: Deactivated successfully. Oct 2 20:08:20.014592 systemd-logind[1128]: Session 3 logged out. Waiting for processes to exit. 
Oct 2 20:08:20.015604 systemd[1]: Started sshd@3-10.0.0.13:22-10.0.0.1:45204.service. Oct 2 20:08:20.016276 systemd-logind[1128]: Removed session 3. Oct 2 20:08:20.059710 sshd[1221]: Accepted publickey for core from 10.0.0.1 port 45204 ssh2: RSA SHA256:JAs1apcNutvDkqjmOZG93AHJl+jbIx12KdulabYDxP4 Oct 2 20:08:20.060977 sshd[1221]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:08:20.064036 systemd-logind[1128]: New session 4 of user core. Oct 2 20:08:20.064839 systemd[1]: Started session-4.scope. Oct 2 20:08:20.118039 sshd[1221]: pam_unix(sshd:session): session closed for user core Oct 2 20:08:20.122457 systemd[1]: sshd@3-10.0.0.13:22-10.0.0.1:45204.service: Deactivated successfully. Oct 2 20:08:20.123270 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 20:08:20.123960 systemd-logind[1128]: Session 4 logged out. Waiting for processes to exit. Oct 2 20:08:20.125392 systemd[1]: Started sshd@4-10.0.0.13:22-10.0.0.1:45218.service. Oct 2 20:08:20.126409 systemd-logind[1128]: Removed session 4. Oct 2 20:08:20.169005 sshd[1227]: Accepted publickey for core from 10.0.0.1 port 45218 ssh2: RSA SHA256:JAs1apcNutvDkqjmOZG93AHJl+jbIx12KdulabYDxP4 Oct 2 20:08:20.170151 sshd[1227]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:08:20.173417 systemd-logind[1128]: New session 5 of user core. Oct 2 20:08:20.175344 systemd[1]: Started session-5.scope. Oct 2 20:08:20.235319 sudo[1230]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 20:08:20.235527 sudo[1230]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 20:08:20.249391 dbus-daemon[1115]: avc: received setenforce notice (enforcing=1) Oct 2 20:08:20.250361 sudo[1230]: pam_unix(sudo:session): session closed for user root Oct 2 20:08:20.252517 sshd[1227]: pam_unix(sshd:session): session closed for user core Oct 2 20:08:20.255456 systemd[1]: sshd@4-10.0.0.13:22-10.0.0.1:45218.service: Deactivated successfully. Oct 2 20:08:20.256062 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 20:08:20.256581 systemd-logind[1128]: Session 5 logged out. Waiting for processes to exit. Oct 2 20:08:20.257665 systemd[1]: Started sshd@5-10.0.0.13:22-10.0.0.1:45230.service. Oct 2 20:08:20.258184 systemd-logind[1128]: Removed session 5. Oct 2 20:08:20.303684 sshd[1234]: Accepted publickey for core from 10.0.0.1 port 45230 ssh2: RSA SHA256:JAs1apcNutvDkqjmOZG93AHJl+jbIx12KdulabYDxP4 Oct 2 20:08:20.305046 sshd[1234]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:08:20.308920 systemd-logind[1128]: New session 6 of user core. Oct 2 20:08:20.309356 systemd[1]: Started session-6.scope. Oct 2 20:08:20.364001 sudo[1238]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 20:08:20.364201 sudo[1238]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 20:08:20.367080 sudo[1238]: pam_unix(sudo:session): session closed for user root Oct 2 20:08:20.371873 sudo[1237]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 20:08:20.372072 sudo[1237]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 20:08:20.381242 systemd[1]: Stopping audit-rules.service... 
Oct 2 20:08:20.381000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 20:08:20.383390 kernel: kauditd_printk_skb: 123 callbacks suppressed Oct 2 20:08:20.383436 kernel: audit: type=1305 audit(1696277300.381:163): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 20:08:20.383701 auditctl[1241]: No rules Oct 2 20:08:20.383907 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 20:08:20.384070 systemd[1]: Stopped audit-rules.service. Oct 2 20:08:20.381000 audit[1241]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd0b23710 a2=420 a3=0 items=0 ppid=1 pid=1241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:20.385511 systemd[1]: Starting audit-rules.service... Oct 2 20:08:20.388506 kernel: audit: type=1300 audit(1696277300.381:163): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd0b23710 a2=420 a3=0 items=0 ppid=1 pid=1241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:20.388548 kernel: audit: type=1327 audit(1696277300.381:163): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 20:08:20.381000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 20:08:20.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:20.392007 kernel: audit: type=1131 audit(1696277300.382:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:20.404114 augenrules[1258]: No rules Oct 2 20:08:20.405168 systemd[1]: Finished audit-rules.service. Oct 2 20:08:20.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:20.406333 sudo[1237]: pam_unix(sudo:session): session closed for user root Oct 2 20:08:20.406000 audit[1237]: USER_END pid=1237 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:08:20.411031 kernel: audit: type=1130 audit(1696277300.405:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:20.411086 kernel: audit: type=1106 audit(1696277300.406:166): pid=1237 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 20:08:20.411274 sshd[1234]: pam_unix(sshd:session): session closed for user core Oct 2 20:08:20.406000 audit[1237]: CRED_DISP pid=1237 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:08:20.414003 kernel: audit: type=1104 audit(1696277300.406:167): pid=1237 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:08:20.414000 audit[1234]: USER_END pid=1234 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 20:08:20.414831 systemd[1]: Started sshd@6-10.0.0.13:22-10.0.0.1:45240.service. Oct 2 20:08:20.418927 systemd[1]: sshd@5-10.0.0.13:22-10.0.0.1:45230.service: Deactivated successfully. Oct 2 20:08:20.414000 audit[1234]: CRED_DISP pid=1234 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 20:08:20.419528 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 20:08:20.421935 kernel: audit: type=1106 audit(1696277300.414:168): pid=1234 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 20:08:20.421986 kernel: audit: type=1104 audit(1696277300.414:169): pid=1234 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 20:08:20.422003 kernel: audit: type=1130 audit(1696277300.414:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.13:22-10.0.0.1:45240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:20.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.13:22-10.0.0.1:45240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:20.422113 systemd-logind[1128]: Session 6 logged out. Waiting for processes to exit. Oct 2 20:08:20.422883 systemd-logind[1128]: Removed session 6. Oct 2 20:08:20.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.13:22-10.0.0.1:45230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:08:20.458000 audit[1263]: USER_ACCT pid=1263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 20:08:20.459554 sshd[1263]: Accepted publickey for core from 10.0.0.1 port 45240 ssh2: RSA SHA256:JAs1apcNutvDkqjmOZG93AHJl+jbIx12KdulabYDxP4 Oct 2 20:08:20.459000 audit[1263]: CRED_ACQ pid=1263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 20:08:20.459000 audit[1263]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffb0129e0 a2=3 a3=1 items=0 ppid=1 pid=1263 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:20.459000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 20:08:20.460708 sshd[1263]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:08:20.464468 systemd[1]: Started session-7.scope. Oct 2 20:08:20.464760 systemd-logind[1128]: New session 7 of user core. Oct 2 20:08:20.467000 audit[1263]: USER_START pid=1263 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 20:08:20.468000 audit[1266]: CRED_ACQ pid=1266 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 20:08:20.515000 audit[1267]: USER_ACCT pid=1267 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:08:20.517298 sudo[1267]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 20:08:20.516000 audit[1267]: CRED_REFR pid=1267 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:08:20.517511 sudo[1267]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 20:08:20.517000 audit[1267]: USER_START pid=1267 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:08:21.043018 systemd[1]: Reloading. 
Oct 2 20:08:21.093904 /usr/lib/systemd/system-generators/torcx-generator[1297]: time="2023-10-02T20:08:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 20:08:21.093929 /usr/lib/systemd/system-generators/torcx-generator[1297]: time="2023-10-02T20:08:21Z" level=info msg="torcx already run" Oct 2 20:08:21.154513 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 20:08:21.154530 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 20:08:21.171673 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 20:08:21.215000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.215000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.215000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.215000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.215000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.215000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.215000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.215000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.215000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.215000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.215000 audit: BPF prog-id=34 op=LOAD Oct 2 20:08:21.215000 audit: BPF prog-id=32 op=UNLOAD Oct 2 20:08:21.216000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.216000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.216000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.216000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.216000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.216000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.216000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.216000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.216000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.216000 audit: BPF prog-id=35 op=LOAD Oct 2 20:08:21.216000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.216000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.216000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.216000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.216000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.216000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.216000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.216000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.216000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.216000 audit: BPF prog-id=36 
op=LOAD Oct 2 20:08:21.216000 audit: BPF prog-id=21 op=UNLOAD Oct 2 20:08:21.216000 audit: BPF prog-id=22 op=UNLOAD Oct 2 20:08:21.218000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.218000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.218000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.218000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.218000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.218000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.218000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.218000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.218000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.218000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.218000 audit: BPF prog-id=37 op=LOAD Oct 2 20:08:21.218000 audit: BPF prog-id=23 op=UNLOAD Oct 2 20:08:21.218000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.218000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.218000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.218000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.218000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.218000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
20:08:21.218000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.218000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.218000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.218000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.218000 audit: BPF prog-id=38 op=LOAD Oct 2 20:08:21.218000 audit: BPF prog-id=24 op=UNLOAD Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit: BPF prog-id=39 op=LOAD Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit: BPF prog-id=40 op=LOAD Oct 2 20:08:21.219000 audit: BPF prog-id=25 op=UNLOAD Oct 2 20:08:21.219000 audit: BPF prog-id=26 op=UNLOAD Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit: BPF prog-id=41 op=LOAD Oct 2 20:08:21.219000 audit: BPF prog-id=29 op=UNLOAD Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { bpf } 
for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit: BPF prog-id=42 op=LOAD Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.219000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.220000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.220000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.220000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.220000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.220000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.220000 audit[1]: 
AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.220000 audit: BPF prog-id=43 op=LOAD Oct 2 20:08:21.220000 audit: BPF prog-id=30 op=UNLOAD Oct 2 20:08:21.220000 audit: BPF prog-id=31 op=UNLOAD Oct 2 20:08:21.221000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.221000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.221000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.221000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.221000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.221000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.221000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.221000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.221000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.221000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.221000 audit: BPF prog-id=44 op=LOAD Oct 2 20:08:21.221000 audit: BPF prog-id=18 op=UNLOAD Oct 2 20:08:21.221000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.221000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.221000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.221000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.221000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
20:08:21.221000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.221000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.221000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.221000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.221000 audit: BPF prog-id=45 op=LOAD Oct 2 20:08:21.221000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.221000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.221000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.221000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.221000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.221000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.221000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.221000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.221000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.221000 audit: BPF prog-id=46 op=LOAD Oct 2 20:08:21.221000 audit: BPF prog-id=19 op=UNLOAD Oct 2 20:08:21.221000 audit: BPF prog-id=20 op=UNLOAD Oct 2 20:08:21.222000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.222000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.222000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.222000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.222000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.222000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.222000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.222000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.222000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.222000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.222000 audit: BPF prog-id=47 op=LOAD Oct 2 20:08:21.222000 audit: BPF prog-id=28 op=UNLOAD Oct 2 20:08:21.222000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.222000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.222000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.222000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.222000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.222000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.222000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.222000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.222000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.222000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.222000 audit: BPF prog-id=48 op=LOAD Oct 2 20:08:21.222000 audit: BPF prog-id=27 op=UNLOAD Oct 2 20:08:21.230820 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 20:08:21.237012 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 20:08:21.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:21.237612 systemd[1]: Reached target network-online.target. Oct 2 20:08:21.239044 systemd[1]: Started kubelet.service. Oct 2 20:08:21.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:21.249377 systemd[1]: Starting coreos-metadata.service... Oct 2 20:08:21.257103 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 2 20:08:21.257517 systemd[1]: Finished coreos-metadata.service. Oct 2 20:08:21.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:21.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:21.446808 kubelet[1335]: E1002 20:08:21.446627 1335 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Oct 2 20:08:21.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 2 20:08:21.448981 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 2 20:08:21.449098 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 2 20:08:21.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:21.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:21.584602 systemd[1]: Stopped kubelet.service. Oct 2 20:08:21.598820 systemd[1]: Reloading. Oct 2 20:08:21.649271 /usr/lib/systemd/system-generators/torcx-generator[1403]: time="2023-10-02T20:08:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 20:08:21.649297 /usr/lib/systemd/system-generators/torcx-generator[1403]: time="2023-10-02T20:08:21Z" level=info msg="torcx already run" Oct 2 20:08:21.707981 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. 
Support for CPUShares= will be removed soon. Oct 2 20:08:21.707999 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 20:08:21.725320 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 20:08:21.768000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.768000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.768000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.768000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.768000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.768000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.768000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.768000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.768000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.768000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.768000 audit: BPF prog-id=49 op=LOAD Oct 2 20:08:21.768000 audit: BPF prog-id=34 op=UNLOAD Oct 2 20:08:21.769000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.769000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.769000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.769000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.769000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.769000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.769000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.769000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.769000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.769000 audit: BPF prog-id=50 op=LOAD Oct 2 20:08:21.769000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.769000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.769000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.769000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.769000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.769000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.769000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.769000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.769000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.769000 audit: BPF prog-id=51 op=LOAD Oct 2 20:08:21.769000 audit: BPF prog-id=35 op=UNLOAD Oct 2 20:08:21.769000 audit: BPF prog-id=36 op=UNLOAD Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit: BPF prog-id=52 op=LOAD Oct 2 20:08:21.771000 audit: BPF prog-id=37 op=UNLOAD Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
20:08:21.771000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit: BPF prog-id=53 op=LOAD Oct 2 20:08:21.771000 audit: BPF prog-id=38 op=UNLOAD Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit: BPF prog-id=54 op=LOAD Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.771000 audit: BPF prog-id=55 op=LOAD Oct 2 20:08:21.771000 audit: BPF prog-id=39 op=UNLOAD Oct 2 20:08:21.771000 audit: BPF prog-id=40 op=UNLOAD Oct 2 20:08:21.772000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.772000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.772000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.772000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.772000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.772000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.772000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.772000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.772000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.772000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.772000 audit: BPF prog-id=56 op=LOAD Oct 2 20:08:21.772000 audit: BPF prog-id=41 op=UNLOAD Oct 2 20:08:21.772000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.772000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.772000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.772000 audit[1]: AVC avc: denied { perfmon } for 
pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.772000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.772000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.772000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.772000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.772000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.772000 audit: BPF prog-id=57 op=LOAD Oct 2 20:08:21.772000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.772000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.772000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.772000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.772000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.772000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.772000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.772000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.772000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.772000 audit: BPF prog-id=58 op=LOAD Oct 2 20:08:21.772000 audit: BPF prog-id=42 op=UNLOAD Oct 2 20:08:21.772000 audit: BPF prog-id=43 op=UNLOAD Oct 2 20:08:21.773000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.773000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.773000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.773000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.773000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.773000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.773000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.773000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.773000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.774000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.774000 audit: BPF prog-id=59 op=LOAD Oct 2 20:08:21.774000 audit: BPF prog-id=44 op=UNLOAD Oct 2 20:08:21.774000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.774000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.774000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.774000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.774000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.774000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.774000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.774000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 
2 20:08:21.774000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.774000 audit: BPF prog-id=60 op=LOAD Oct 2 20:08:21.774000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.774000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.774000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.774000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.774000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.774000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.774000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.774000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.774000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.774000 audit: BPF prog-id=61 op=LOAD Oct 2 20:08:21.774000 audit: BPF prog-id=45 op=UNLOAD Oct 2 20:08:21.774000 audit: BPF prog-id=46 op=UNLOAD Oct 2 20:08:21.774000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.774000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.774000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.774000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.774000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.774000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.774000 audit[1]: 
AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.774000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.774000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.775000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.775000 audit: BPF prog-id=62 op=LOAD Oct 2 20:08:21.775000 audit: BPF prog-id=47 op=UNLOAD Oct 2 20:08:21.775000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.775000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.775000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.775000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.775000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.775000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.775000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.775000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.775000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.775000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:21.775000 audit: BPF prog-id=63 op=LOAD Oct 2 20:08:21.775000 audit: BPF prog-id=48 op=UNLOAD Oct 2 20:08:21.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:21.789012 systemd[1]: Started kubelet.service. Oct 2 20:08:21.834060 kubelet[1440]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
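
The first kubelet instance above (pid 1335) exited because no container runtime endpoint was configured ("use --container-runtime-endpoint to set"); after the reload, the relaunched kubelet (pid 1440) starts and, per the entries that follow, reaches containerd 1.6.16 over CRI. A minimal, hypothetical sketch of supplying that flag is shown below; the drop-in layout, the KUBELET_EXTRA_ARGS variable, and the containerd socket path are assumptions, not details recorded in this log.

    # Hypothetical sketch only: check how kubelet.service is actually assembled on
    # this host (e.g. by torcx) before using it. The socket path is the containerd
    # default and is assumed, as is a unit that expands $KUBELET_EXTRA_ARGS.
    mkdir -p /etc/systemd/system/kubelet.service.d
    {
      echo '[Service]'
      echo 'Environment="KUBELET_EXTRA_ARGS=--container-runtime-endpoint=unix:///run/containerd/containerd.sock"'
    } > /etc/systemd/system/kubelet.service.d/20-container-runtime.conf
    systemctl daemon-reload
    systemctl restart kubelet.service
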
Oct 2 20:08:21.834060 kubelet[1440]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 20:08:21.834403 kubelet[1440]: I1002 20:08:21.834296 1440 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 20:08:21.835518 kubelet[1440]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 20:08:21.835518 kubelet[1440]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 20:08:23.424885 kubelet[1440]: I1002 20:08:23.423142 1440 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Oct 2 20:08:23.424885 kubelet[1440]: I1002 20:08:23.423168 1440 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 20:08:23.424885 kubelet[1440]: I1002 20:08:23.423380 1440 server.go:836] "Client rotation is on, will bootstrap in background" Oct 2 20:08:23.428855 kubelet[1440]: W1002 20:08:23.428677 1440 machine.go:65] Cannot read vendor id correctly, set empty. Oct 2 20:08:23.432828 kubelet[1440]: I1002 20:08:23.429386 1440 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 2 20:08:23.432828 kubelet[1440]: I1002 20:08:23.429757 1440 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 20:08:23.432828 kubelet[1440]: I1002 20:08:23.429828 1440 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Oct 2 20:08:23.432828 kubelet[1440]: I1002 20:08:23.429853 1440 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 20:08:23.432828 kubelet[1440]: I1002 20:08:23.429864 1440 container_manager_linux.go:308] "Creating device plugin manager" Oct 2 20:08:23.432828 kubelet[1440]: I1002 
20:08:23.430019 1440 state_mem.go:36] "Initialized new in-memory state store" Oct 2 20:08:23.433058 kubelet[1440]: I1002 20:08:23.430500 1440 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 20:08:23.434951 kubelet[1440]: I1002 20:08:23.434824 1440 kubelet.go:398] "Attempting to sync node with API server" Oct 2 20:08:23.434951 kubelet[1440]: I1002 20:08:23.434847 1440 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 20:08:23.435091 kubelet[1440]: I1002 20:08:23.434999 1440 kubelet.go:297] "Adding apiserver pod source" Oct 2 20:08:23.435091 kubelet[1440]: I1002 20:08:23.435010 1440 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 20:08:23.435255 kubelet[1440]: E1002 20:08:23.435238 1440 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:23.435324 kubelet[1440]: E1002 20:08:23.435302 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:23.436855 kubelet[1440]: I1002 20:08:23.436652 1440 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 20:08:23.438321 kubelet[1440]: W1002 20:08:23.438290 1440 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 2 20:08:23.438853 kubelet[1440]: I1002 20:08:23.438821 1440 server.go:1186] "Started kubelet" Oct 2 20:08:23.439362 kubelet[1440]: I1002 20:08:23.439342 1440 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 20:08:23.440425 kubelet[1440]: E1002 20:08:23.440397 1440 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 20:08:23.440481 kubelet[1440]: E1002 20:08:23.440434 1440 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 20:08:23.441000 audit[1440]: AVC avc: denied { mac_admin } for pid=1440 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:23.441000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 20:08:23.441000 audit[1440]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000dfe270 a1=4000ea8390 a2=4000dfe210 a3=25 items=0 ppid=1 pid=1440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.441000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 20:08:23.441000 audit[1440]: AVC avc: denied { mac_admin } for pid=1440 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:23.441000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 20:08:23.441000 audit[1440]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40002cef40 a1=4000ea83a8 a2=4000dfe3f0 a3=25 items=0 ppid=1 pid=1440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.441000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 20:08:23.445039 kubelet[1440]: I1002 20:08:23.441341 1440 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 20:08:23.445039 kubelet[1440]: I1002 20:08:23.441370 1440 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 20:08:23.445039 kubelet[1440]: I1002 20:08:23.441419 1440 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 20:08:23.445039 kubelet[1440]: I1002 20:08:23.442154 1440 volume_manager.go:293] "Starting Kubelet Volume Manager" Oct 2 20:08:23.445039 kubelet[1440]: I1002 20:08:23.442391 1440 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 2 20:08:23.445039 kubelet[1440]: I1002 20:08:23.442725 1440 server.go:451] "Adding debug handlers to kubelet server" Oct 2 20:08:23.460944 kubelet[1440]: W1002 20:08:23.460896 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 20:08:23.460944 kubelet[1440]: E1002 20:08:23.460950 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: 
failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 20:08:23.461084 kubelet[1440]: W1002 20:08:23.461014 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.13" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:08:23.461084 kubelet[1440]: E1002 20:08:23.461024 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.13" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:08:23.461131 kubelet[1440]: E1002 20:08:23.461048 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a63386006f424", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 438799908, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 438799908, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:08:23.461315 kubelet[1440]: W1002 20:08:23.461295 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:08:23.461374 kubelet[1440]: E1002 20:08:23.461319 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:08:23.461374 kubelet[1440]: E1002 20:08:23.461365 1440 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.0.0.13" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 20:08:23.462159 kubelet[1440]: E1002 20:08:23.462073 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a6338601faa67", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 440419431, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 440419431, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:08:23.463129 kubelet[1440]: I1002 20:08:23.463113 1440 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 2 20:08:23.463316 kubelet[1440]: I1002 20:08:23.463302 1440 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 2 20:08:23.463380 kubelet[1440]: I1002 20:08:23.463370 1440 state_mem.go:36] "Initialized new in-memory state store" Oct 2 20:08:23.464196 kubelet[1440]: E1002 20:08:23.464124 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a6338616ff2d7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.13 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 462458071, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 462458071, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:08:23.465155 kubelet[1440]: E1002 20:08:23.465088 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a633861702381", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.13 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 462470529, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 462470529, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:08:23.465763 kubelet[1440]: I1002 20:08:23.465741 1440 policy_none.go:49] "None policy: Start" Oct 2 20:08:23.466364 kubelet[1440]: E1002 20:08:23.466304 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a633861703036", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.13 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 462473782, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 462473782, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:08:23.466873 kubelet[1440]: I1002 20:08:23.466805 1440 memory_manager.go:169] "Starting memorymanager" policy="None" Oct 2 20:08:23.466988 kubelet[1440]: I1002 20:08:23.466975 1440 state_mem.go:35] "Initializing new in-memory state store" Oct 2 20:08:23.472079 systemd[1]: Created slice kubepods.slice. Oct 2 20:08:23.475903 systemd[1]: Created slice kubepods-burstable.slice. Oct 2 20:08:23.481795 systemd[1]: Created slice kubepods-besteffort.slice. 
Oct 2 20:08:23.481000 audit[1457]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1457 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:23.481000 audit[1457]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc2905810 a2=0 a3=1 items=0 ppid=1440 pid=1457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.481000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 20:08:23.482000 audit[1459]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1459 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:23.482000 audit[1459]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffe4628e30 a2=0 a3=1 items=0 ppid=1440 pid=1459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.482000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 20:08:23.486874 kubelet[1440]: I1002 20:08:23.486849 1440 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 20:08:23.485000 audit[1440]: AVC avc: denied { mac_admin } for pid=1440 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:23.485000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 20:08:23.485000 audit[1440]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000e8de00 a1=4000e0bb30 a2=4000e8ddd0 a3=25 items=0 ppid=1 pid=1440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.485000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 20:08:23.487148 kubelet[1440]: I1002 20:08:23.487005 1440 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 20:08:23.487228 kubelet[1440]: I1002 20:08:23.487197 1440 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 20:08:23.487983 kubelet[1440]: E1002 20:08:23.487956 1440 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.13\" not found" Oct 2 20:08:23.489985 kubelet[1440]: E1002 20:08:23.489904 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a633862fd740a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 488508938, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 488508938, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:08:23.484000 audit[1461]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1461 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:23.484000 audit[1461]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff03696f0 a2=0 a3=1 items=0 ppid=1440 pid=1461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.484000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 20:08:23.506000 audit[1467]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1467 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:23.506000 audit[1467]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe50f5d20 a2=0 a3=1 items=0 ppid=1440 pid=1467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.506000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 20:08:23.543341 kubelet[1440]: I1002 20:08:23.543288 1440 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.13" Oct 2 20:08:23.542000 audit[1472]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1472 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:23.542000 audit[1472]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffff40caef0 a2=0 a3=1 items=0 ppid=1440 pid=1472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.542000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 20:08:23.544814 kubelet[1440]: E1002 20:08:23.544770 1440 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.13" Oct 2 20:08:23.544879 kubelet[1440]: E1002 20:08:23.544775 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a6338616ff2d7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.13 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 8, 
23, 462458071, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 542792108, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a6338616ff2d7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:08:23.544000 audit[1473]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=1473 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:23.544000 audit[1473]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe5bf13b0 a2=0 a3=1 items=0 ppid=1440 pid=1473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.544000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 20:08:23.545839 kubelet[1440]: E1002 20:08:23.545747 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a633861702381", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.13 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 462470529, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 543251899, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a633861702381" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:08:23.546783 kubelet[1440]: E1002 20:08:23.546721 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a633861703036", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.13 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 462473782, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 543255668, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a633861703036" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:08:23.549000 audit[1476]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=1476 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:23.549000 audit[1476]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffe62f6470 a2=0 a3=1 items=0 ppid=1440 pid=1476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.549000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 20:08:23.554000 audit[1479]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1479 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:23.554000 audit[1479]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffc97eb6a0 a2=0 a3=1 items=0 ppid=1440 pid=1479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.554000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 20:08:23.555000 audit[1480]: NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_chain pid=1480 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:23.555000 audit[1480]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffec097b00 a2=0 a3=1 items=0 ppid=1440 pid=1480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.555000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 20:08:23.556000 audit[1481]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=1481 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:23.556000 audit[1481]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc2c9ad10 a2=0 a3=1 items=0 ppid=1440 pid=1481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.556000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 20:08:23.560000 audit[1483]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=1483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:23.560000 audit[1483]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffeab25da0 a2=0 a3=1 items=0 ppid=1440 pid=1483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.560000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 20:08:23.562000 audit[1485]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1485 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:23.562000 audit[1485]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffdafeaab0 a2=0 a3=1 items=0 ppid=1440 pid=1485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.562000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 20:08:23.584000 audit[1488]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1488 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:23.584000 audit[1488]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffc8855ff0 a2=0 a3=1 items=0 ppid=1440 pid=1488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.584000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 20:08:23.586000 audit[1490]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=1490 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:23.586000 audit[1490]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=fffff81e05d0 a2=0 a3=1 items=0 ppid=1440 pid=1490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.586000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 20:08:23.593000 audit[1493]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=1493 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:23.593000 audit[1493]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=540 a0=3 a1=fffffa818720 a2=0 a3=1 items=0 ppid=1440 pid=1493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.593000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 20:08:23.594714 kubelet[1440]: I1002 20:08:23.594680 1440 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Oct 2 20:08:23.594000 audit[1494]: NETFILTER_CFG table=mangle:17 family=10 entries=2 op=nft_register_chain pid=1494 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:23.594000 audit[1494]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffef5f4480 a2=0 a3=1 items=0 ppid=1440 pid=1494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.594000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 20:08:23.594000 audit[1495]: NETFILTER_CFG table=mangle:18 family=2 entries=1 op=nft_register_chain pid=1495 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:23.594000 audit[1495]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc2e12b70 a2=0 a3=1 items=0 ppid=1440 pid=1495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.594000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 20:08:23.595000 audit[1496]: NETFILTER_CFG table=nat:19 family=10 entries=2 op=nft_register_chain pid=1496 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:23.595000 audit[1496]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe7722f50 a2=0 a3=1 items=0 ppid=1440 pid=1496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.595000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 20:08:23.595000 audit[1497]: NETFILTER_CFG table=nat:20 family=2 entries=1 op=nft_register_chain pid=1497 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:23.595000 audit[1497]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc190a9c0 a2=0 a3=1 items=0 ppid=1440 pid=1497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.595000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 20:08:23.596000 audit[1499]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_chain pid=1499 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:23.596000 audit[1499]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe6245c10 a2=0 a3=1 items=0 ppid=1440 pid=1499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.596000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 20:08:23.597000 audit[1500]: NETFILTER_CFG table=nat:22 family=10 entries=1 op=nft_register_rule pid=1500 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:23.597000 audit[1500]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffece4c5d0 a2=0 a3=1 items=0 ppid=1440 pid=1500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.597000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 20:08:23.598000 audit[1501]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=1501 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:23.598000 audit[1501]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=fffff2d90230 a2=0 a3=1 items=0 ppid=1440 pid=1501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.598000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 20:08:23.601000 audit[1503]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=1503 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:23.601000 audit[1503]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=fffffc2ed520 a2=0 a3=1 items=0 ppid=1440 pid=1503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.601000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 20:08:23.602000 audit[1504]: NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=1504 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:23.602000 audit[1504]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe2713e80 a2=0 a3=1 items=0 ppid=1440 pid=1504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.602000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 20:08:23.603000 audit[1505]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=1505 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:23.603000 audit[1505]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffee837780 a2=0 a3=1 items=0 ppid=1440 pid=1505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.603000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 20:08:23.606000 audit[1507]: NETFILTER_CFG table=nat:27 family=10 entries=1 op=nft_register_rule pid=1507 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:23.606000 audit[1507]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffe66bd0a0 a2=0 a3=1 items=0 ppid=1440 pid=1507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.606000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 20:08:23.608000 audit[1509]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=1509 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:23.608000 audit[1509]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffc67cb140 a2=0 a3=1 items=0 ppid=1440 pid=1509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.608000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 20:08:23.611000 audit[1511]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=1511 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:23.611000 audit[1511]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffd2ed9ac0 a2=0 a3=1 items=0 ppid=1440 pid=1511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.611000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 20:08:23.613000 audit[1513]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=1513 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:23.613000 audit[1513]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffe85791d0 a2=0 a3=1 items=0 ppid=1440 pid=1513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.613000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 20:08:23.617000 audit[1515]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=1515 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:23.617000 audit[1515]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=556 a0=3 a1=ffffc01c5fd0 a2=0 a3=1 items=0 ppid=1440 pid=1515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.617000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 20:08:23.619073 kubelet[1440]: I1002 20:08:23.619053 1440 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Oct 2 20:08:23.619163 kubelet[1440]: I1002 20:08:23.619152 1440 status_manager.go:176] "Starting to sync pod status with apiserver" Oct 2 20:08:23.619271 kubelet[1440]: I1002 20:08:23.619260 1440 kubelet.go:2113] "Starting kubelet main sync loop" Oct 2 20:08:23.619379 kubelet[1440]: E1002 20:08:23.619368 1440 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 20:08:23.618000 audit[1516]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=1516 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:23.618000 audit[1516]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff7981d50 a2=0 a3=1 items=0 ppid=1440 pid=1516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.618000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 20:08:23.620864 kubelet[1440]: W1002 20:08:23.620836 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:08:23.620920 kubelet[1440]: E1002 20:08:23.620873 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:08:23.620000 audit[1517]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=1517 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:23.620000 audit[1517]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdebdde60 a2=0 a3=1 items=0 ppid=1440 pid=1517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 
20:08:23.620000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 20:08:23.621000 audit[1518]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=1518 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:23.621000 audit[1518]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe11ac7d0 a2=0 a3=1 items=0 ppid=1440 pid=1518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:23.621000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 20:08:23.663258 kubelet[1440]: E1002 20:08:23.663212 1440 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.0.0.13" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 20:08:23.746322 kubelet[1440]: I1002 20:08:23.746206 1440 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.13" Oct 2 20:08:23.749713 kubelet[1440]: E1002 20:08:23.749640 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a6338616ff2d7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.13 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 462458071, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 746169439, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a6338616ff2d7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:08:23.750022 kubelet[1440]: E1002 20:08:23.749768 1440 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.13" Oct 2 20:08:23.750746 kubelet[1440]: E1002 20:08:23.750688 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a633861702381", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.13 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 462470529, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 746174239, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a633861702381" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:08:23.841462 kubelet[1440]: E1002 20:08:23.841356 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a633861703036", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.13 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 462473782, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 746176977, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a633861703036" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:08:24.064812 kubelet[1440]: E1002 20:08:24.064708 1440 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.0.0.13" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 20:08:24.150824 kubelet[1440]: I1002 20:08:24.150800 1440 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.13" Oct 2 20:08:24.152202 kubelet[1440]: E1002 20:08:24.152131 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a6338616ff2d7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.13 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 462458071, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 8, 24, 150759797, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a6338616ff2d7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:08:24.152384 kubelet[1440]: E1002 20:08:24.152356 1440 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.13" Oct 2 20:08:24.242005 kubelet[1440]: E1002 20:08:24.241901 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a633861702381", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.13 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 462470529, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 8, 24, 150771433, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a633861702381" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:08:24.435878 kubelet[1440]: E1002 20:08:24.435785 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:24.441475 kubelet[1440]: E1002 20:08:24.441383 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a633861703036", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.13 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 462473782, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 8, 24, 150774412, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a633861703036" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:08:24.821572 kubelet[1440]: W1002 20:08:24.821480 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:08:24.821572 kubelet[1440]: E1002 20:08:24.821511 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:08:24.866678 kubelet[1440]: E1002 20:08:24.866640 1440 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.0.0.13" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 20:08:24.919974 kubelet[1440]: W1002 20:08:24.919943 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.13" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:08:24.920188 kubelet[1440]: E1002 20:08:24.920175 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.13" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:08:24.953990 kubelet[1440]: I1002 20:08:24.953959 1440 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.13" Oct 2 20:08:24.955233 kubelet[1440]: E1002 20:08:24.955187 1440 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.13" Oct 2 20:08:24.955233 kubelet[1440]: E1002 20:08:24.955142 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a6338616ff2d7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.13 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 462458071, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 8, 24, 953920567, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a6338616ff2d7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:08:24.956090 kubelet[1440]: E1002 20:08:24.956029 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a633861702381", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.13 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 462470529, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 8, 24, 953930257, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a633861702381" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:08:24.971345 kubelet[1440]: W1002 20:08:24.971315 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 20:08:24.971345 kubelet[1440]: E1002 20:08:24.971348 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 20:08:25.043752 kubelet[1440]: E1002 20:08:25.043653 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a633861703036", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.13 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 462473782, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 8, 24, 953933355, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a633861703036" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:08:25.104217 kubelet[1440]: W1002 20:08:25.104131 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:08:25.104217 kubelet[1440]: E1002 20:08:25.104158 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:08:25.438255 kubelet[1440]: E1002 20:08:25.438102 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:26.439019 kubelet[1440]: E1002 20:08:26.438946 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:26.468696 kubelet[1440]: E1002 20:08:26.468639 1440 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.0.0.13" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 20:08:26.560818 kubelet[1440]: I1002 20:08:26.560790 1440 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.13" Oct 2 20:08:26.561960 kubelet[1440]: E1002 20:08:26.561926 1440 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.13" Oct 2 20:08:26.562019 kubelet[1440]: E1002 20:08:26.561941 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a6338616ff2d7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.13 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 462458071, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 8, 26, 560744226, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a6338616ff2d7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
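The lease controller's retry interval doubles on each failure: 1.6s at 20:08:24, 3.2s at 20:08:26, and 6.4s a few seconds further down. A generic Go sketch of that doubling pattern, for illustration only and not the kubelet's actual lease-controller code:

package main

import (
	"fmt"
	"time"
)

// Doubling retry delay, mirroring the 1.6s -> 3.2s -> 6.4s intervals
// reported by controller.go:146 in the log. Illustrative sketch only.
func main() {
	delay := 1600 * time.Millisecond
	for attempt := 1; attempt <= 3; attempt++ {
		fmt.Printf("attempt %d failed, will retry in %s\n", attempt, delay)
		delay *= 2
	}
}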
Oct 2 20:08:26.562915 kubelet[1440]: E1002 20:08:26.562844 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a633861702381", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.13 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 462470529, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 8, 26, 560759741, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a633861702381" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:08:26.563697 kubelet[1440]: E1002 20:08:26.563637 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a633861703036", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.13 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 462473782, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 8, 26, 560762963, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a633861703036" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:08:27.332761 kubelet[1440]: W1002 20:08:27.332711 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.13" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:08:27.332761 kubelet[1440]: E1002 20:08:27.332750 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.13" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:08:27.439497 kubelet[1440]: E1002 20:08:27.439443 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:27.791816 kubelet[1440]: W1002 20:08:27.790524 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:08:27.791816 kubelet[1440]: E1002 20:08:27.790563 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:08:27.961743 kubelet[1440]: W1002 20:08:27.961685 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:08:27.961743 kubelet[1440]: E1002 20:08:27.961722 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:08:28.046734 kubelet[1440]: W1002 20:08:28.046383 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 20:08:28.046734 kubelet[1440]: E1002 20:08:28.046418 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 20:08:28.440147 kubelet[1440]: E1002 20:08:28.439945 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:29.440912 kubelet[1440]: E1002 20:08:29.440802 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:29.670784 kubelet[1440]: E1002 20:08:29.670735 1440 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.0.0.13" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 20:08:29.763004 kubelet[1440]: I1002 20:08:29.762905 1440 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.13" Oct 2 20:08:29.764466 kubelet[1440]: E1002 20:08:29.764440 1440 
kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.13" Oct 2 20:08:29.764599 kubelet[1440]: E1002 20:08:29.764434 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a6338616ff2d7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.13 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 462458071, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 8, 29, 762871908, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a6338616ff2d7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:08:29.765724 kubelet[1440]: E1002 20:08:29.765647 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a633861702381", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.13 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 462470529, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 8, 29, 762876611, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a633861702381" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
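The recurring 'forbidden: User "system:anonymous"' failures above are what an unauthenticated client sees from the API server: until the kubelet's bootstrap credentials are issued (the "Certificate rotation detected" line further down marks the switch, after which the node registers at 20:08:36), its requests are attributed to system:anonymous and rejected by RBAC. A minimal client-go sketch of how the same Forbidden response surfaces to a caller; the API server address is hypothetical and this is not the kubelet's own code path:

package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// No bearer token or client certificate: the server authenticates the
	// request as system:anonymous (when anonymous auth is enabled).
	cfg := &rest.Config{
		Host:            "https://10.0.0.1:6443", // hypothetical API server address
		TLSClientConfig: rest.TLSClientConfig{Insecure: true},
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_, err = client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if apierrors.IsForbidden(err) {
		// Same class of error as the reflector lines in the log:
		// nodes is forbidden: User "system:anonymous" cannot list resource "nodes"
		fmt.Println("forbidden:", err)
	}
}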
Oct 2 20:08:29.766627 kubelet[1440]: E1002 20:08:29.766570 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.13.178a633861703036", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.13", UID:"10.0.0.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.13 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.13"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 8, 23, 462473782, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 8, 29, 762879640, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.13.178a633861703036" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:08:30.441612 kubelet[1440]: E1002 20:08:30.441553 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:30.941256 kubelet[1440]: W1002 20:08:30.941126 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.13" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:08:30.941256 kubelet[1440]: E1002 20:08:30.941163 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.13" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:08:31.442205 kubelet[1440]: E1002 20:08:31.442176 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:32.442333 kubelet[1440]: E1002 20:08:32.442287 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:32.742134 kubelet[1440]: W1002 20:08:32.742045 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:08:32.742134 kubelet[1440]: E1002 20:08:32.742080 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:08:33.351645 kubelet[1440]: W1002 20:08:33.351596 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at 
the cluster scope Oct 2 20:08:33.351645 kubelet[1440]: E1002 20:08:33.351631 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:08:33.428057 kubelet[1440]: I1002 20:08:33.428011 1440 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 20:08:33.443305 kubelet[1440]: E1002 20:08:33.443273 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:33.488526 kubelet[1440]: E1002 20:08:33.488478 1440 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.13\" not found" Oct 2 20:08:33.811662 kubelet[1440]: E1002 20:08:33.811552 1440 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.13" not found Oct 2 20:08:34.443916 kubelet[1440]: E1002 20:08:34.443881 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:35.153785 kubelet[1440]: E1002 20:08:35.153749 1440 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.13" not found Oct 2 20:08:35.444317 kubelet[1440]: E1002 20:08:35.444198 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:36.075783 kubelet[1440]: E1002 20:08:36.075748 1440 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.13\" not found" node="10.0.0.13" Oct 2 20:08:36.166046 kubelet[1440]: I1002 20:08:36.166019 1440 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.13" Oct 2 20:08:36.444910 kubelet[1440]: E1002 20:08:36.444805 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:36.555202 kubelet[1440]: I1002 20:08:36.555163 1440 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.13" Oct 2 20:08:36.561701 kubelet[1440]: E1002 20:08:36.561666 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:36.662363 kubelet[1440]: E1002 20:08:36.662326 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:36.763548 kubelet[1440]: E1002 20:08:36.763403 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:36.804158 sudo[1267]: pam_unix(sudo:session): session closed for user root Oct 2 20:08:36.805649 kernel: kauditd_printk_skb: 474 callbacks suppressed Oct 2 20:08:36.805685 kernel: audit: type=1106 audit(1696277316.802:568): pid=1267 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 20:08:36.802000 audit[1267]: USER_END pid=1267 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:08:36.805519 sshd[1263]: pam_unix(sshd:session): session closed for user core Oct 2 20:08:36.807872 systemd[1]: sshd@6-10.0.0.13:22-10.0.0.1:45240.service: Deactivated successfully. Oct 2 20:08:36.803000 audit[1267]: CRED_DISP pid=1267 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:08:36.808590 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 20:08:36.809090 systemd-logind[1128]: Session 7 logged out. Waiting for processes to exit. Oct 2 20:08:36.809748 systemd-logind[1128]: Removed session 7. Oct 2 20:08:36.811309 kernel: audit: type=1104 audit(1696277316.803:569): pid=1267 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:08:36.811378 kernel: audit: type=1106 audit(1696277316.805:570): pid=1263 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 20:08:36.805000 audit[1263]: USER_END pid=1263 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 20:08:36.805000 audit[1263]: CRED_DISP pid=1263 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 20:08:36.817366 kernel: audit: type=1104 audit(1696277316.805:571): pid=1263 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 20:08:36.817410 kernel: audit: type=1131 audit(1696277316.806:572): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.13:22-10.0.0.1:45240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:08:36.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.13:22-10.0.0.1:45240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:08:36.863895 kubelet[1440]: E1002 20:08:36.863866 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:36.964131 kubelet[1440]: E1002 20:08:36.964088 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:37.064678 kubelet[1440]: E1002 20:08:37.064579 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:37.165378 kubelet[1440]: E1002 20:08:37.165351 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:37.265797 kubelet[1440]: E1002 20:08:37.265762 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:37.366272 kubelet[1440]: E1002 20:08:37.366168 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:37.445933 kubelet[1440]: E1002 20:08:37.445897 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:37.467023 kubelet[1440]: E1002 20:08:37.467001 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:37.567622 kubelet[1440]: E1002 20:08:37.567590 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:37.668252 kubelet[1440]: E1002 20:08:37.668126 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:37.768715 kubelet[1440]: E1002 20:08:37.768669 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:37.869145 kubelet[1440]: E1002 20:08:37.869113 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:37.969896 kubelet[1440]: E1002 20:08:37.969803 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:38.070298 kubelet[1440]: E1002 20:08:38.070247 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:38.170601 kubelet[1440]: E1002 20:08:38.170580 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:38.271106 kubelet[1440]: E1002 20:08:38.270993 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:38.371513 kubelet[1440]: E1002 20:08:38.371473 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:38.447342 kubelet[1440]: E1002 20:08:38.447296 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:38.472413 kubelet[1440]: E1002 20:08:38.472390 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:38.572623 kubelet[1440]: E1002 20:08:38.572537 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:38.672761 kubelet[1440]: E1002 20:08:38.672732 1440 
kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:38.773244 kubelet[1440]: E1002 20:08:38.773199 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:38.873739 kubelet[1440]: E1002 20:08:38.873640 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:38.974281 kubelet[1440]: E1002 20:08:38.974219 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:39.074776 kubelet[1440]: E1002 20:08:39.074735 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:39.175269 kubelet[1440]: E1002 20:08:39.175164 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:39.275587 kubelet[1440]: E1002 20:08:39.275562 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:39.376025 kubelet[1440]: E1002 20:08:39.375998 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:39.447840 kubelet[1440]: E1002 20:08:39.447724 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:39.476884 kubelet[1440]: E1002 20:08:39.476849 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:39.577426 kubelet[1440]: E1002 20:08:39.577394 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:39.677750 kubelet[1440]: E1002 20:08:39.677708 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:39.778290 kubelet[1440]: E1002 20:08:39.778167 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.13\" not found" Oct 2 20:08:39.879661 kubelet[1440]: I1002 20:08:39.879628 1440 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 20:08:39.879960 env[1140]: time="2023-10-02T20:08:39.879913362Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 2 20:08:39.880211 kubelet[1440]: I1002 20:08:39.880105 1440 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 20:08:40.446093 kubelet[1440]: I1002 20:08:40.446050 1440 apiserver.go:52] "Watching apiserver" Oct 2 20:08:40.448230 kubelet[1440]: E1002 20:08:40.448189 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:40.449851 kubelet[1440]: I1002 20:08:40.449818 1440 topology_manager.go:210] "Topology Admit Handler" Oct 2 20:08:40.449916 kubelet[1440]: I1002 20:08:40.449902 1440 topology_manager.go:210] "Topology Admit Handler" Oct 2 20:08:40.455200 systemd[1]: Created slice kubepods-burstable-pod6e084e2d_c888_44cb_a7cc_b0a905c1c524.slice. Oct 2 20:08:40.469127 systemd[1]: Created slice kubepods-besteffort-pode6e29591_fae6_4383_ae75_de5a6c8eb6fc.slice. 
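The two slices created above show the systemd cgroup naming the kubelet uses for pods: the QoS-class prefix (kubepods-burstable-*, kubepods-besteffort-*) plus the pod UID with dashes mapped to underscores, which is how UID 6e084e2d-c888-44cb-a7cc-b0a905c1c524 becomes kubepods-burstable-pod6e084e2d_c888_44cb_a7cc_b0a905c1c524.slice. A minimal helper reproducing just that naming pattern (illustrative only; the kubelet's real cgroup manager handles more cases than this):

package main

import (
	"fmt"
	"strings"
)

// podSliceName builds the slice name seen in the log: the QoS class prefix
// plus the pod UID with "-" replaced by "_". Sketch only, not kubelet code.
func podSliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("burstable", "6e084e2d-c888-44cb-a7cc-b0a905c1c524"))
	// kubepods-burstable-pod6e084e2d_c888_44cb_a7cc_b0a905c1c524.slice
	fmt.Println(podSliceName("besteffort", "e6e29591-fae6-4383-ae75-de5a6c8eb6fc"))
	// kubepods-besteffort-pode6e29591_fae6_4383_ae75_de5a6c8eb6fc.slice
}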
Oct 2 20:08:40.544075 kubelet[1440]: I1002 20:08:40.544040 1440 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 2 20:08:40.629785 kubelet[1440]: I1002 20:08:40.629756 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-hostproc\") pod \"cilium-97rjw\" (UID: \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\") " pod="kube-system/cilium-97rjw" Oct 2 20:08:40.629994 kubelet[1440]: I1002 20:08:40.629979 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-etc-cni-netd\") pod \"cilium-97rjw\" (UID: \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\") " pod="kube-system/cilium-97rjw" Oct 2 20:08:40.630108 kubelet[1440]: I1002 20:08:40.630093 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-host-proc-sys-kernel\") pod \"cilium-97rjw\" (UID: \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\") " pod="kube-system/cilium-97rjw" Oct 2 20:08:40.630201 kubelet[1440]: I1002 20:08:40.630191 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjtjx\" (UniqueName: \"kubernetes.io/projected/6e084e2d-c888-44cb-a7cc-b0a905c1c524-kube-api-access-sjtjx\") pod \"cilium-97rjw\" (UID: \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\") " pod="kube-system/cilium-97rjw" Oct 2 20:08:40.630323 kubelet[1440]: I1002 20:08:40.630311 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6e29591-fae6-4383-ae75-de5a6c8eb6fc-lib-modules\") pod \"kube-proxy-zpqnx\" (UID: \"e6e29591-fae6-4383-ae75-de5a6c8eb6fc\") " pod="kube-system/kube-proxy-zpqnx" Oct 2 20:08:40.630407 kubelet[1440]: I1002 20:08:40.630397 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-cni-path\") pod \"cilium-97rjw\" (UID: \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\") " pod="kube-system/cilium-97rjw" Oct 2 20:08:40.630478 kubelet[1440]: I1002 20:08:40.630468 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-xtables-lock\") pod \"cilium-97rjw\" (UID: \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\") " pod="kube-system/cilium-97rjw" Oct 2 20:08:40.630590 kubelet[1440]: I1002 20:08:40.630547 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e6e29591-fae6-4383-ae75-de5a6c8eb6fc-kube-proxy\") pod \"kube-proxy-zpqnx\" (UID: \"e6e29591-fae6-4383-ae75-de5a6c8eb6fc\") " pod="kube-system/kube-proxy-zpqnx" Oct 2 20:08:40.630590 kubelet[1440]: I1002 20:08:40.630590 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvsgg\" (UniqueName: \"kubernetes.io/projected/e6e29591-fae6-4383-ae75-de5a6c8eb6fc-kube-api-access-rvsgg\") pod \"kube-proxy-zpqnx\" (UID: \"e6e29591-fae6-4383-ae75-de5a6c8eb6fc\") " pod="kube-system/kube-proxy-zpqnx" Oct 2 
20:08:40.630657 kubelet[1440]: I1002 20:08:40.630612 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-cilium-cgroup\") pod \"cilium-97rjw\" (UID: \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\") " pod="kube-system/cilium-97rjw" Oct 2 20:08:40.630657 kubelet[1440]: I1002 20:08:40.630632 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6e084e2d-c888-44cb-a7cc-b0a905c1c524-clustermesh-secrets\") pod \"cilium-97rjw\" (UID: \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\") " pod="kube-system/cilium-97rjw" Oct 2 20:08:40.630699 kubelet[1440]: I1002 20:08:40.630670 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-host-proc-sys-net\") pod \"cilium-97rjw\" (UID: \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\") " pod="kube-system/cilium-97rjw" Oct 2 20:08:40.630739 kubelet[1440]: I1002 20:08:40.630723 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6e084e2d-c888-44cb-a7cc-b0a905c1c524-hubble-tls\") pod \"cilium-97rjw\" (UID: \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\") " pod="kube-system/cilium-97rjw" Oct 2 20:08:40.630775 kubelet[1440]: I1002 20:08:40.630752 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6e29591-fae6-4383-ae75-de5a6c8eb6fc-xtables-lock\") pod \"kube-proxy-zpqnx\" (UID: \"e6e29591-fae6-4383-ae75-de5a6c8eb6fc\") " pod="kube-system/kube-proxy-zpqnx" Oct 2 20:08:40.630801 kubelet[1440]: I1002 20:08:40.630790 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-cilium-run\") pod \"cilium-97rjw\" (UID: \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\") " pod="kube-system/cilium-97rjw" Oct 2 20:08:40.630826 kubelet[1440]: I1002 20:08:40.630811 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-bpf-maps\") pod \"cilium-97rjw\" (UID: \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\") " pod="kube-system/cilium-97rjw" Oct 2 20:08:40.630849 kubelet[1440]: I1002 20:08:40.630841 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-lib-modules\") pod \"cilium-97rjw\" (UID: \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\") " pod="kube-system/cilium-97rjw" Oct 2 20:08:40.630887 kubelet[1440]: I1002 20:08:40.630875 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6e084e2d-c888-44cb-a7cc-b0a905c1c524-cilium-config-path\") pod \"cilium-97rjw\" (UID: \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\") " pod="kube-system/cilium-97rjw" Oct 2 20:08:40.630926 kubelet[1440]: I1002 20:08:40.630893 1440 reconciler.go:41] "Reconciler: start to sync state" Oct 2 20:08:40.781826 kubelet[1440]: E1002 20:08:40.781159 1440 dns.go:156] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:08:40.781955 env[1140]: time="2023-10-02T20:08:40.781892268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zpqnx,Uid:e6e29591-fae6-4383-ae75-de5a6c8eb6fc,Namespace:kube-system,Attempt:0,}" Oct 2 20:08:41.067920 kubelet[1440]: E1002 20:08:41.067721 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:08:41.068511 env[1140]: time="2023-10-02T20:08:41.068473630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-97rjw,Uid:6e084e2d-c888-44cb-a7cc-b0a905c1c524,Namespace:kube-system,Attempt:0,}" Oct 2 20:08:41.320381 env[1140]: time="2023-10-02T20:08:41.320172121Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:08:41.321332 env[1140]: time="2023-10-02T20:08:41.321298452Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:08:41.322573 env[1140]: time="2023-10-02T20:08:41.322542417Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:08:41.324005 env[1140]: time="2023-10-02T20:08:41.323977361Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:08:41.324771 env[1140]: time="2023-10-02T20:08:41.324734165Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:08:41.327150 env[1140]: time="2023-10-02T20:08:41.327116892Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:08:41.328805 env[1140]: time="2023-10-02T20:08:41.328778750Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:08:41.329595 env[1140]: time="2023-10-02T20:08:41.329559895Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:08:41.364906 env[1140]: time="2023-10-02T20:08:41.364816680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 20:08:41.364906 env[1140]: time="2023-10-02T20:08:41.364858729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 20:08:41.364906 env[1140]: time="2023-10-02T20:08:41.364868962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 20:08:41.365881 env[1140]: time="2023-10-02T20:08:41.365834212Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fef3dd38f4b5549d8c9affec319972d39c8409d0c9a38eeb4d5c4b284b6d5dcc pid=1542 runtime=io.containerd.runc.v2 Oct 2 20:08:41.366190 env[1140]: time="2023-10-02T20:08:41.366101775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 20:08:41.366190 env[1140]: time="2023-10-02T20:08:41.366133471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 20:08:41.366190 env[1140]: time="2023-10-02T20:08:41.366143464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 20:08:41.366401 env[1140]: time="2023-10-02T20:08:41.366362463Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/33fcf33019eb97207377e3758ca06b82d777f2313460c7ee51f4e57a9d0ee2cb pid=1541 runtime=io.containerd.runc.v2 Oct 2 20:08:41.389714 systemd[1]: Started cri-containerd-33fcf33019eb97207377e3758ca06b82d777f2313460c7ee51f4e57a9d0ee2cb.scope. Oct 2 20:08:41.390999 systemd[1]: Started cri-containerd-fef3dd38f4b5549d8c9affec319972d39c8409d0c9a38eeb4d5c4b284b6d5dcc.scope. Oct 2 20:08:41.415000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.415000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.421175 kernel: audit: type=1400 audit(1696277321.415:573): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.421234 kernel: audit: type=1400 audit(1696277321.415:574): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.421255 kernel: audit: type=1400 audit(1696277321.415:575): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.415000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.415000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.425794 kernel: audit: type=1400 audit(1696277321.415:576): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.425839 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 20:08:41.415000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 20:08:41.415000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.415000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.415000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.415000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.417000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.417000 audit: BPF prog-id=64 op=LOAD Oct 2 20:08:41.419000 audit[1560]: AVC avc: denied { bpf } for pid=1560 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.419000 audit[1560]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=40001c5b38 a2=10 a3=0 items=0 ppid=1541 pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:41.419000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333666366333330313965623937323037333737653337353863613036 Oct 2 20:08:41.419000 audit[1560]: AVC avc: denied { perfmon } for pid=1560 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.419000 audit[1560]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001c55a0 a2=3c a3=0 items=0 ppid=1541 pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:41.419000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333666366333330313965623937323037333737653337353863613036 Oct 2 20:08:41.420000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.420000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.420000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.420000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.420000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.420000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.420000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.420000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.420000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.421000 audit[1560]: AVC avc: denied { bpf } for pid=1560 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.421000 audit[1560]: AVC avc: denied { bpf } for pid=1560 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.421000 audit[1560]: AVC avc: denied { bpf } for pid=1560 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.421000 audit[1560]: AVC avc: denied { perfmon } for pid=1560 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.421000 audit[1560]: AVC avc: denied { perfmon } for pid=1560 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.421000 audit[1560]: AVC avc: denied { perfmon } for pid=1560 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.421000 audit[1560]: AVC avc: denied { perfmon } for pid=1560 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.421000 audit[1560]: AVC avc: denied { perfmon } for pid=1560 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.421000 audit[1560]: AVC avc: denied { bpf } for pid=1560 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.422000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.422000 audit: BPF prog-id=65 op=LOAD Oct 2 20:08:41.421000 audit[1560]: AVC avc: denied { bpf } for pid=1560 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.421000 audit: BPF prog-id=66 
op=LOAD Oct 2 20:08:41.421000 audit[1560]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001c58e0 a2=78 a3=0 items=0 ppid=1541 pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:41.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333666366333330313965623937323037333737653337353863613036 Oct 2 20:08:41.422000 audit[1560]: AVC avc: denied { bpf } for pid=1560 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.422000 audit[1560]: AVC avc: denied { bpf } for pid=1560 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.422000 audit[1560]: AVC avc: denied { perfmon } for pid=1560 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.422000 audit[1560]: AVC avc: denied { perfmon } for pid=1560 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.422000 audit[1560]: AVC avc: denied { perfmon } for pid=1560 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.422000 audit[1560]: AVC avc: denied { perfmon } for pid=1560 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.422000 audit[1560]: AVC avc: denied { perfmon } for pid=1560 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.422000 audit[1560]: AVC avc: denied { bpf } for pid=1560 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.422000 audit[1562]: AVC avc: denied { bpf } for pid=1562 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.422000 audit[1562]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=40001c5b38 a2=10 a3=0 items=0 ppid=1542 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:41.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665663364643338663462353534396438633961666665633331393937 Oct 2 20:08:41.422000 audit[1562]: AVC avc: denied { perfmon } for pid=1562 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.422000 audit[1562]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001c55a0 a2=3c a3=0 items=0 ppid=1542 pid=1562 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:41.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665663364643338663462353534396438633961666665633331393937 Oct 2 20:08:41.422000 audit[1562]: AVC avc: denied { bpf } for pid=1562 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.422000 audit[1562]: AVC avc: denied { bpf } for pid=1562 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.422000 audit[1562]: AVC avc: denied { bpf } for pid=1562 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.422000 audit[1562]: AVC avc: denied { perfmon } for pid=1562 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.422000 audit[1562]: AVC avc: denied { perfmon } for pid=1562 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.422000 audit[1562]: AVC avc: denied { perfmon } for pid=1562 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.422000 audit[1562]: AVC avc: denied { perfmon } for pid=1562 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.422000 audit[1562]: AVC avc: denied { perfmon } for pid=1562 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.422000 audit[1562]: AVC avc: denied { bpf } for pid=1562 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.422000 audit: BPF prog-id=67 op=LOAD Oct 2 20:08:41.422000 audit[1562]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001c58e0 a2=78 a3=0 items=0 ppid=1542 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:41.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665663364643338663462353534396438633961666665633331393937 Oct 2 20:08:41.425000 audit[1562]: AVC avc: denied { bpf } for pid=1562 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.425000 audit[1562]: AVC avc: denied { bpf } for pid=1562 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.425000 audit[1562]: AVC avc: 
denied { perfmon } for pid=1562 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.425000 audit[1562]: AVC avc: denied { perfmon } for pid=1562 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.425000 audit[1562]: AVC avc: denied { perfmon } for pid=1562 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.425000 audit[1562]: AVC avc: denied { perfmon } for pid=1562 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.425000 audit[1562]: AVC avc: denied { perfmon } for pid=1562 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.425000 audit[1562]: AVC avc: denied { bpf } for pid=1562 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.425000 audit[1562]: AVC avc: denied { bpf } for pid=1562 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.425000 audit: BPF prog-id=68 op=LOAD Oct 2 20:08:41.425000 audit[1562]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=40001c5670 a2=78 a3=0 items=0 ppid=1542 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:41.425000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665663364643338663462353534396438633961666665633331393937 Oct 2 20:08:41.425000 audit: BPF prog-id=68 op=UNLOAD Oct 2 20:08:41.425000 audit: BPF prog-id=67 op=UNLOAD Oct 2 20:08:41.425000 audit[1562]: AVC avc: denied { bpf } for pid=1562 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.425000 audit[1562]: AVC avc: denied { bpf } for pid=1562 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.425000 audit[1562]: AVC avc: denied { bpf } for pid=1562 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.425000 audit[1562]: AVC avc: denied { perfmon } for pid=1562 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.425000 audit[1562]: AVC avc: denied { perfmon } for pid=1562 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.425000 audit[1562]: AVC avc: denied { perfmon } for pid=1562 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.425000 audit[1562]: AVC avc: denied { 
perfmon } for pid=1562 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.425000 audit[1562]: AVC avc: denied { perfmon } for pid=1562 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.425000 audit[1562]: AVC avc: denied { bpf } for pid=1562 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.425000 audit[1562]: AVC avc: denied { bpf } for pid=1562 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.425000 audit: BPF prog-id=69 op=LOAD Oct 2 20:08:41.425000 audit[1562]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001c5b40 a2=78 a3=0 items=0 ppid=1542 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:41.425000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665663364643338663462353534396438633961666665633331393937 Oct 2 20:08:41.422000 audit[1560]: AVC avc: denied { bpf } for pid=1560 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.422000 audit: BPF prog-id=70 op=LOAD Oct 2 20:08:41.422000 audit[1560]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001c5670 a2=78 a3=0 items=0 ppid=1541 pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:41.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333666366333330313965623937323037333737653337353863613036 Oct 2 20:08:41.425000 audit: BPF prog-id=70 op=UNLOAD Oct 2 20:08:41.425000 audit: BPF prog-id=66 op=UNLOAD Oct 2 20:08:41.425000 audit[1560]: AVC avc: denied { bpf } for pid=1560 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.425000 audit[1560]: AVC avc: denied { bpf } for pid=1560 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.425000 audit[1560]: AVC avc: denied { bpf } for pid=1560 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.425000 audit[1560]: AVC avc: denied { perfmon } for pid=1560 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.425000 audit[1560]: AVC avc: denied { perfmon } for pid=1560 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 
2 20:08:41.425000 audit[1560]: AVC avc: denied { perfmon } for pid=1560 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.425000 audit[1560]: AVC avc: denied { perfmon } for pid=1560 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.425000 audit[1560]: AVC avc: denied { perfmon } for pid=1560 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.425000 audit[1560]: AVC avc: denied { bpf } for pid=1560 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.425000 audit[1560]: AVC avc: denied { bpf } for pid=1560 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:41.425000 audit: BPF prog-id=71 op=LOAD Oct 2 20:08:41.425000 audit[1560]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001c5b40 a2=78 a3=0 items=0 ppid=1541 pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:41.425000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333666366333330313965623937323037333737653337353863613036 Oct 2 20:08:41.441171 env[1140]: time="2023-10-02T20:08:41.441124148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zpqnx,Uid:e6e29591-fae6-4383-ae75-de5a6c8eb6fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"fef3dd38f4b5549d8c9affec319972d39c8409d0c9a38eeb4d5c4b284b6d5dcc\"" Oct 2 20:08:41.442025 env[1140]: time="2023-10-02T20:08:41.441981278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-97rjw,Uid:6e084e2d-c888-44cb-a7cc-b0a905c1c524,Namespace:kube-system,Attempt:0,} returns sandbox id \"33fcf33019eb97207377e3758ca06b82d777f2313460c7ee51f4e57a9d0ee2cb\"" Oct 2 20:08:41.442542 kubelet[1440]: E1002 20:08:41.442516 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:08:41.442656 kubelet[1440]: E1002 20:08:41.442641 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:08:41.444163 env[1140]: time="2023-10-02T20:08:41.444128658Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.9\"" Oct 2 20:08:41.448751 kubelet[1440]: E1002 20:08:41.448726 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:41.738902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount126053190.mount: Deactivated successfully. 
Oct 2 20:08:42.449480 kubelet[1440]: E1002 20:08:42.449421 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:42.602442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3745904331.mount: Deactivated successfully. Oct 2 20:08:42.947054 env[1140]: time="2023-10-02T20:08:42.946998231Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:08:42.948293 env[1140]: time="2023-10-02T20:08:42.947829096Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0393a046c6ac3c39d56f9b536c02216184f07904e0db26449490d0cb1d1fe343,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:08:42.949648 env[1140]: time="2023-10-02T20:08:42.949574253Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:08:42.951853 env[1140]: time="2023-10-02T20:08:42.950951007Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d8c8e3e8fe630c3f2d84a22722d4891343196483ac4cc02c1ba9345b1bfc8a3d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:08:42.951853 env[1140]: time="2023-10-02T20:08:42.951471871Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.9\" returns image reference \"sha256:0393a046c6ac3c39d56f9b536c02216184f07904e0db26449490d0cb1d1fe343\"" Oct 2 20:08:42.953501 env[1140]: time="2023-10-02T20:08:42.953383001Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 2 20:08:42.954760 env[1140]: time="2023-10-02T20:08:42.954727976Z" level=info msg="CreateContainer within sandbox \"fef3dd38f4b5549d8c9affec319972d39c8409d0c9a38eeb4d5c4b284b6d5dcc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 20:08:42.965928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2612487343.mount: Deactivated successfully. Oct 2 20:08:42.977372 env[1140]: time="2023-10-02T20:08:42.977313880Z" level=info msg="CreateContainer within sandbox \"fef3dd38f4b5549d8c9affec319972d39c8409d0c9a38eeb4d5c4b284b6d5dcc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ef6c2cbba3e219feacda7c99acfd746b9b60ce2b9fad05bf53170de208e19b95\"" Oct 2 20:08:42.978398 env[1140]: time="2023-10-02T20:08:42.978365323Z" level=info msg="StartContainer for \"ef6c2cbba3e219feacda7c99acfd746b9b60ce2b9fad05bf53170de208e19b95\"" Oct 2 20:08:43.006906 systemd[1]: Started cri-containerd-ef6c2cbba3e219feacda7c99acfd746b9b60ce2b9fad05bf53170de208e19b95.scope. 
Oct 2 20:08:43.021000 audit[1617]: AVC avc: denied { perfmon } for pid=1617 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.026434 kernel: kauditd_printk_skb: 111 callbacks suppressed Oct 2 20:08:43.026529 kernel: audit: type=1400 audit(1696277323.021:609): avc: denied { perfmon } for pid=1617 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.026556 kernel: audit: type=1300 audit(1696277323.021:609): arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001155a0 a2=3c a3=0 items=0 ppid=1542 pid=1617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.021000 audit[1617]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001155a0 a2=3c a3=0 items=0 ppid=1542 pid=1617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.021000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566366332636262613365323139666561636461376339396163666437 Oct 2 20:08:43.033537 kernel: audit: type=1327 audit(1696277323.021:609): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566366332636262613365323139666561636461376339396163666437 Oct 2 20:08:43.033589 kernel: audit: type=1400 audit(1696277323.022:610): avc: denied { bpf } for pid=1617 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.022000 audit[1617]: AVC avc: denied { bpf } for pid=1617 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.022000 audit[1617]: AVC avc: denied { bpf } for pid=1617 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.038125 kernel: audit: type=1400 audit(1696277323.022:610): avc: denied { bpf } for pid=1617 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.022000 audit[1617]: AVC avc: denied { bpf } for pid=1617 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.040647 kernel: audit: type=1400 audit(1696277323.022:610): avc: denied { bpf } for pid=1617 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.022000 audit[1617]: AVC avc: denied { perfmon } for pid=1617 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.043195 kernel: audit: type=1400 audit(1696277323.022:610): avc: denied { 
perfmon } for pid=1617 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.022000 audit[1617]: AVC avc: denied { perfmon } for pid=1617 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.045815 kernel: audit: type=1400 audit(1696277323.022:610): avc: denied { perfmon } for pid=1617 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.045871 kernel: audit: type=1400 audit(1696277323.022:610): avc: denied { perfmon } for pid=1617 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.022000 audit[1617]: AVC avc: denied { perfmon } for pid=1617 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.022000 audit[1617]: AVC avc: denied { perfmon } for pid=1617 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.050468 kernel: audit: type=1400 audit(1696277323.022:610): avc: denied { perfmon } for pid=1617 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.022000 audit[1617]: AVC avc: denied { perfmon } for pid=1617 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.022000 audit[1617]: AVC avc: denied { bpf } for pid=1617 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.022000 audit[1617]: AVC avc: denied { bpf } for pid=1617 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.022000 audit: BPF prog-id=72 op=LOAD Oct 2 20:08:43.022000 audit[1617]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001158e0 a2=78 a3=0 items=0 ppid=1542 pid=1617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566366332636262613365323139666561636461376339396163666437 Oct 2 20:08:43.022000 audit[1617]: AVC avc: denied { bpf } for pid=1617 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.022000 audit[1617]: AVC avc: denied { bpf } for pid=1617 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.022000 audit[1617]: AVC avc: denied { perfmon } for pid=1617 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.022000 audit[1617]: AVC avc: denied { perfmon } 
for pid=1617 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.022000 audit[1617]: AVC avc: denied { perfmon } for pid=1617 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.022000 audit[1617]: AVC avc: denied { perfmon } for pid=1617 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.022000 audit[1617]: AVC avc: denied { perfmon } for pid=1617 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.022000 audit[1617]: AVC avc: denied { bpf } for pid=1617 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.022000 audit[1617]: AVC avc: denied { bpf } for pid=1617 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.022000 audit: BPF prog-id=73 op=LOAD Oct 2 20:08:43.022000 audit[1617]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000115670 a2=78 a3=0 items=0 ppid=1542 pid=1617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566366332636262613365323139666561636461376339396163666437 Oct 2 20:08:43.025000 audit: BPF prog-id=73 op=UNLOAD Oct 2 20:08:43.025000 audit: BPF prog-id=72 op=UNLOAD Oct 2 20:08:43.025000 audit[1617]: AVC avc: denied { bpf } for pid=1617 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.025000 audit[1617]: AVC avc: denied { bpf } for pid=1617 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.025000 audit[1617]: AVC avc: denied { bpf } for pid=1617 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.025000 audit[1617]: AVC avc: denied { perfmon } for pid=1617 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.025000 audit[1617]: AVC avc: denied { perfmon } for pid=1617 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.025000 audit[1617]: AVC avc: denied { perfmon } for pid=1617 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.025000 audit[1617]: AVC avc: denied { perfmon } for pid=1617 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.025000 audit[1617]: AVC avc: denied { perfmon } for 
pid=1617 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.025000 audit[1617]: AVC avc: denied { bpf } for pid=1617 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.025000 audit[1617]: AVC avc: denied { bpf } for pid=1617 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:08:43.025000 audit: BPF prog-id=74 op=LOAD Oct 2 20:08:43.025000 audit[1617]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000115b40 a2=78 a3=0 items=0 ppid=1542 pid=1617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.025000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566366332636262613365323139666561636461376339396163666437 Oct 2 20:08:43.060305 env[1140]: time="2023-10-02T20:08:43.059591170Z" level=info msg="StartContainer for \"ef6c2cbba3e219feacda7c99acfd746b9b60ce2b9fad05bf53170de208e19b95\" returns successfully" Oct 2 20:08:43.156000 audit[1670]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1670 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:43.156000 audit[1670]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffdf51ae0 a2=0 a3=ffffa6b586c0 items=0 ppid=1628 pid=1670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.156000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 20:08:43.157000 audit[1669]: NETFILTER_CFG table=mangle:36 family=2 entries=1 op=nft_register_chain pid=1669 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:43.157000 audit[1669]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc98b7940 a2=0 a3=ffff9d7e26c0 items=0 ppid=1628 pid=1669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.157000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 20:08:43.158000 audit[1672]: NETFILTER_CFG table=nat:37 family=10 entries=1 op=nft_register_chain pid=1672 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:43.158000 audit[1672]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff1328600 a2=0 a3=ffff8e1836c0 items=0 ppid=1628 pid=1672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.158000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 20:08:43.159000 audit[1673]: NETFILTER_CFG table=nat:38 
family=2 entries=1 op=nft_register_chain pid=1673 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:43.159000 audit[1673]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff8bd99b0 a2=0 a3=ffffabde26c0 items=0 ppid=1628 pid=1673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.159000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 20:08:43.160000 audit[1674]: NETFILTER_CFG table=filter:39 family=10 entries=1 op=nft_register_chain pid=1674 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:43.160000 audit[1674]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe158dcd0 a2=0 a3=ffffbe6296c0 items=0 ppid=1628 pid=1674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.160000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 20:08:43.161000 audit[1675]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain pid=1675 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:43.161000 audit[1675]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc1ad0790 a2=0 a3=ffff9085d6c0 items=0 ppid=1628 pid=1675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.161000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 20:08:43.259000 audit[1676]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=1676 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:43.259000 audit[1676]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff40b22f0 a2=0 a3=ffff80df36c0 items=0 ppid=1628 pid=1676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.259000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 20:08:43.263000 audit[1678]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=1678 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:43.263000 audit[1678]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd95b2060 a2=0 a3=ffff80c786c0 items=0 ppid=1628 pid=1678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.263000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 20:08:43.267000 audit[1681]: NETFILTER_CFG table=filter:43 family=2 
entries=2 op=nft_register_chain pid=1681 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:43.267000 audit[1681]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffe4b8cf10 a2=0 a3=ffff8b5af6c0 items=0 ppid=1628 pid=1681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.267000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 20:08:43.268000 audit[1682]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=1682 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:43.268000 audit[1682]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffef0b08a0 a2=0 a3=ffffb22da6c0 items=0 ppid=1628 pid=1682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.268000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 20:08:43.271000 audit[1684]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=1684 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:43.271000 audit[1684]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff7454dd0 a2=0 a3=ffff90ccb6c0 items=0 ppid=1628 pid=1684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.271000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 20:08:43.272000 audit[1685]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=1685 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:43.272000 audit[1685]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc01980c0 a2=0 a3=ffffb02b66c0 items=0 ppid=1628 pid=1685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.272000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 20:08:43.274000 audit[1687]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=1687 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:43.274000 audit[1687]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc4aac390 a2=0 a3=ffffa79d36c0 items=0 ppid=1628 pid=1687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.274000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 20:08:43.278000 audit[1690]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=1690 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:43.278000 audit[1690]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd600d7f0 a2=0 a3=ffffa72946c0 items=0 ppid=1628 pid=1690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.278000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 20:08:43.279000 audit[1691]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=1691 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:43.279000 audit[1691]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffee17c360 a2=0 a3=ffff8566e6c0 items=0 ppid=1628 pid=1691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.279000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 20:08:43.283000 audit[1693]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=1693 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:43.283000 audit[1693]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffdee31d0 a2=0 a3=ffff9d3c36c0 items=0 ppid=1628 pid=1693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.283000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 20:08:43.284000 audit[1694]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=1694 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:43.284000 audit[1694]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd4f7f1e0 a2=0 a3=ffffa1c466c0 items=0 ppid=1628 pid=1694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.284000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 20:08:43.286000 audit[1696]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=1696 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:43.286000 audit[1696]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc480d910 a2=0 a3=ffffa90f86c0 items=0 ppid=1628 pid=1696 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.286000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 20:08:43.290000 audit[1699]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=1699 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:43.290000 audit[1699]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe5dc1a70 a2=0 a3=ffff8936a6c0 items=0 ppid=1628 pid=1699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.290000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 20:08:43.296000 audit[1702]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=1702 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:43.296000 audit[1702]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc9e4d010 a2=0 a3=ffff950316c0 items=0 ppid=1628 pid=1702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.296000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 20:08:43.297000 audit[1703]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=1703 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:43.297000 audit[1703]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff1545020 a2=0 a3=ffffa738b6c0 items=0 ppid=1628 pid=1703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.297000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 20:08:43.300000 audit[1705]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=1705 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:43.300000 audit[1705]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffdd1c4170 a2=0 a3=ffff870e76c0 items=0 ppid=1628 pid=1705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.300000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 20:08:43.303000 audit[1708]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=1708 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:08:43.303000 audit[1708]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffc141d4a0 a2=0 a3=ffff9b0746c0 items=0 ppid=1628 pid=1708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.303000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 20:08:43.314000 audit[1712]: NETFILTER_CFG table=filter:58 family=2 entries=6 op=nft_register_rule pid=1712 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 20:08:43.314000 audit[1712]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffdd704a00 a2=0 a3=ffff8338b6c0 items=0 ppid=1628 pid=1712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.314000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 20:08:43.322000 audit[1712]: NETFILTER_CFG table=nat:59 family=2 entries=17 op=nft_register_chain pid=1712 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 20:08:43.322000 audit[1712]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffdd704a00 a2=0 a3=ffff8338b6c0 items=0 ppid=1628 pid=1712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.322000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 20:08:43.336000 audit[1718]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=1718 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:43.336000 audit[1718]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff7016f60 a2=0 a3=ffff8bb0c6c0 items=0 ppid=1628 pid=1718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.336000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 20:08:43.340000 audit[1720]: NETFILTER_CFG table=filter:61 family=10 entries=2 op=nft_register_chain pid=1720 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:43.340000 audit[1720]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd1561e40 a2=0 a3=ffffa73c66c0 items=0 ppid=1628 pid=1720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.340000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 20:08:43.343000 audit[1723]: NETFILTER_CFG table=filter:62 family=10 entries=2 op=nft_register_chain pid=1723 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:43.343000 audit[1723]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffffe060ea0 a2=0 a3=ffffbd5206c0 items=0 ppid=1628 pid=1723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.343000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 20:08:43.345000 audit[1724]: NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=1724 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:43.345000 audit[1724]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff7a56490 a2=0 a3=ffffb822b6c0 items=0 ppid=1628 pid=1724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.345000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 20:08:43.347000 audit[1726]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_rule pid=1726 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:43.347000 audit[1726]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe731cde0 a2=0 a3=ffff8b4de6c0 items=0 ppid=1628 pid=1726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.347000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 20:08:43.348000 audit[1727]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=1727 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:43.348000 audit[1727]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc0d3d930 a2=0 a3=ffff8552b6c0 items=0 ppid=1628 pid=1727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.348000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 20:08:43.351000 audit[1729]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=1729 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:43.351000 audit[1729]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe690fcb0 a2=0 a3=ffffb99ce6c0 items=0 ppid=1628 pid=1729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.351000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 20:08:43.355000 audit[1732]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=1732 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:43.355000 audit[1732]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffcf473b40 a2=0 a3=ffffab0fb6c0 items=0 ppid=1628 pid=1732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.355000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 20:08:43.356000 audit[1733]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=1733 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:43.356000 audit[1733]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcb0c7000 a2=0 a3=ffffaac376c0 items=0 ppid=1628 pid=1733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.356000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 20:08:43.359000 audit[1735]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=1735 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:43.359000 audit[1735]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc66d8330 a2=0 a3=ffffba7fb6c0 items=0 ppid=1628 pid=1735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.359000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 20:08:43.361000 audit[1736]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=1736 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:43.361000 audit[1736]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff12030c0 a2=0 a3=ffff837aa6c0 items=0 ppid=1628 pid=1736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.361000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 20:08:43.363000 audit[1738]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=1738 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:43.363000 audit[1738]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc393fea0 a2=0 a3=ffffa29c46c0 items=0 ppid=1628 pid=1738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.363000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 20:08:43.367000 audit[1741]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=1741 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:43.367000 audit[1741]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe30f9390 a2=0 a3=ffffb99806c0 items=0 ppid=1628 pid=1741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.367000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 20:08:43.371000 audit[1744]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=1744 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:43.371000 audit[1744]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffb9e9b00 a2=0 a3=ffffb72346c0 items=0 ppid=1628 pid=1744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.371000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 20:08:43.373000 audit[1745]: NETFILTER_CFG table=nat:74 family=10 entries=1 op=nft_register_chain pid=1745 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:43.373000 audit[1745]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe7333300 a2=0 a3=ffff9ab6c6c0 items=0 ppid=1628 pid=1745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.373000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 20:08:43.375000 audit[1747]: NETFILTER_CFG table=nat:75 family=10 entries=2 op=nft_register_chain pid=1747 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:43.375000 audit[1747]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=fffffbb59f90 a2=0 
a3=ffffa5b2b6c0 items=0 ppid=1628 pid=1747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.375000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 20:08:43.379000 audit[1750]: NETFILTER_CFG table=nat:76 family=10 entries=2 op=nft_register_chain pid=1750 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:08:43.379000 audit[1750]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffd9035660 a2=0 a3=ffff7fa076c0 items=0 ppid=1628 pid=1750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.379000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 20:08:43.385000 audit[1754]: NETFILTER_CFG table=filter:77 family=10 entries=3 op=nft_register_rule pid=1754 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 20:08:43.385000 audit[1754]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffed2fffa0 a2=0 a3=ffffbd5306c0 items=0 ppid=1628 pid=1754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.385000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 20:08:43.385000 audit[1754]: NETFILTER_CFG table=nat:78 family=10 entries=10 op=nft_register_chain pid=1754 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 20:08:43.385000 audit[1754]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1968 a0=3 a1=ffffed2fffa0 a2=0 a3=ffffbd5306c0 items=0 ppid=1628 pid=1754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:08:43.385000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 20:08:43.435351 kubelet[1440]: E1002 20:08:43.435312 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:43.449577 kubelet[1440]: E1002 20:08:43.449531 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:43.651306 kubelet[1440]: E1002 20:08:43.651277 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:08:43.660786 kubelet[1440]: I1002 20:08:43.660748 1440 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-zpqnx" podStartSLOduration=-9.223372029194073e+09 pod.CreationTimestamp="2023-10-02 20:08:36 +0000 UTC" firstStartedPulling="2023-10-02 
20:08:41.443461709 +0000 UTC m=+19.649893375" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 20:08:43.660520868 +0000 UTC m=+21.866952534" watchObservedRunningTime="2023-10-02 20:08:43.660702526 +0000 UTC m=+21.867134192" Oct 2 20:08:44.449721 kubelet[1440]: E1002 20:08:44.449665 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:44.652863 kubelet[1440]: E1002 20:08:44.652801 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:08:45.450478 kubelet[1440]: E1002 20:08:45.450378 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:46.450806 kubelet[1440]: E1002 20:08:46.450758 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:46.570214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount56289032.mount: Deactivated successfully. Oct 2 20:08:47.451229 kubelet[1440]: E1002 20:08:47.451177 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:48.452209 kubelet[1440]: E1002 20:08:48.452164 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:48.867678 env[1140]: time="2023-10-02T20:08:48.867633188Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:08:48.868928 env[1140]: time="2023-10-02T20:08:48.868886170Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:08:48.870387 env[1140]: time="2023-10-02T20:08:48.870360957Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:08:48.871539 env[1140]: time="2023-10-02T20:08:48.871504178Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Oct 2 20:08:48.873592 env[1140]: time="2023-10-02T20:08:48.873559415Z" level=info msg="CreateContainer within sandbox \"33fcf33019eb97207377e3758ca06b82d777f2313460c7ee51f4e57a9d0ee2cb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 20:08:48.882296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount690509220.mount: Deactivated successfully. 
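The audit PROCTITLE records above store the invoking command line as a single hex string with NUL bytes separating the arguments; the first one decodes to the ip6tables call that registered the KUBE-SERVICES chain. A minimal decoding sketch (plain Python 3, using only the hex string copied from the record above):

    def decode_proctitle(hexstr: str) -> str:
        # audit encodes argv as hex bytes with NUL separators between arguments
        return bytes.fromhex(hexstr).replace(b"\x00", b" ").decode("utf-8", "replace")

    # first PROCTITLE record from the audit entries above
    print(decode_proctitle(
        "6970367461626C6573002D770035002D5700313030303030002D49004F5554505554"
        "002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E"
        "65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553"
    ))
    # -> ip6tables -w 5 -W 100000 -I OUTPUT -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES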
Oct 2 20:08:48.887479 env[1140]: time="2023-10-02T20:08:48.887442147Z" level=info msg="CreateContainer within sandbox \"33fcf33019eb97207377e3758ca06b82d777f2313460c7ee51f4e57a9d0ee2cb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ed0853f104bd2db1a096d2bf8f63a8dd083cc5691f7ed13f90b7bc998bd1d7ca\"" Oct 2 20:08:48.888233 env[1140]: time="2023-10-02T20:08:48.888193201Z" level=info msg="StartContainer for \"ed0853f104bd2db1a096d2bf8f63a8dd083cc5691f7ed13f90b7bc998bd1d7ca\"" Oct 2 20:08:48.905615 systemd[1]: Started cri-containerd-ed0853f104bd2db1a096d2bf8f63a8dd083cc5691f7ed13f90b7bc998bd1d7ca.scope. Oct 2 20:08:48.925735 systemd[1]: cri-containerd-ed0853f104bd2db1a096d2bf8f63a8dd083cc5691f7ed13f90b7bc998bd1d7ca.scope: Deactivated successfully. Oct 2 20:08:49.104715 env[1140]: time="2023-10-02T20:08:49.104656922Z" level=info msg="shim disconnected" id=ed0853f104bd2db1a096d2bf8f63a8dd083cc5691f7ed13f90b7bc998bd1d7ca Oct 2 20:08:49.104715 env[1140]: time="2023-10-02T20:08:49.104709362Z" level=warning msg="cleaning up after shim disconnected" id=ed0853f104bd2db1a096d2bf8f63a8dd083cc5691f7ed13f90b7bc998bd1d7ca namespace=k8s.io Oct 2 20:08:49.104715 env[1140]: time="2023-10-02T20:08:49.104719483Z" level=info msg="cleaning up dead shim" Oct 2 20:08:49.115727 env[1140]: time="2023-10-02T20:08:49.115682991Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:08:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1787 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:08:49Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ed0853f104bd2db1a096d2bf8f63a8dd083cc5691f7ed13f90b7bc998bd1d7ca/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:08:49.116022 env[1140]: time="2023-10-02T20:08:49.115928955Z" level=error msg="copy shim log" error="read /proc/self/fd/58: file already closed" Oct 2 20:08:49.116258 env[1140]: time="2023-10-02T20:08:49.116200360Z" level=error msg="Failed to pipe stdout of container \"ed0853f104bd2db1a096d2bf8f63a8dd083cc5691f7ed13f90b7bc998bd1d7ca\"" error="reading from a closed fifo" Oct 2 20:08:49.116324 env[1140]: time="2023-10-02T20:08:49.116239760Z" level=error msg="Failed to pipe stderr of container \"ed0853f104bd2db1a096d2bf8f63a8dd083cc5691f7ed13f90b7bc998bd1d7ca\"" error="reading from a closed fifo" Oct 2 20:08:49.117892 env[1140]: time="2023-10-02T20:08:49.117773266Z" level=error msg="StartContainer for \"ed0853f104bd2db1a096d2bf8f63a8dd083cc5691f7ed13f90b7bc998bd1d7ca\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:08:49.118112 kubelet[1440]: E1002 20:08:49.118078 1440 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ed0853f104bd2db1a096d2bf8f63a8dd083cc5691f7ed13f90b7bc998bd1d7ca" Oct 2 20:08:49.118444 kubelet[1440]: E1002 20:08:49.118209 1440 kuberuntime_manager.go:872] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:08:49.118444 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:08:49.118444 kubelet[1440]: rm /hostbin/cilium-mount Oct 2 20:08:49.118444 kubelet[1440]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-sjtjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-97rjw_kube-system(6e084e2d-c888-44cb-a7cc-b0a905c1c524): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:08:49.118637 kubelet[1440]: E1002 20:08:49.118263 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-97rjw" podUID=6e084e2d-c888-44cb-a7cc-b0a905c1c524 Oct 2 20:08:49.453289 kubelet[1440]: E1002 20:08:49.453156 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:49.663786 kubelet[1440]: E1002 20:08:49.663505 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:08:49.665338 env[1140]: time="2023-10-02T20:08:49.665297934Z" level=info msg="CreateContainer within sandbox \"33fcf33019eb97207377e3758ca06b82d777f2313460c7ee51f4e57a9d0ee2cb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 20:08:49.674964 env[1140]: time="2023-10-02T20:08:49.674922220Z" level=info msg="CreateContainer within sandbox \"33fcf33019eb97207377e3758ca06b82d777f2313460c7ee51f4e57a9d0ee2cb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id 
\"ed28b7e749cce989d39a315c386e00eeb29badedd94ced6c00a9bf11ad2b30ee\"" Oct 2 20:08:49.675366 env[1140]: time="2023-10-02T20:08:49.675338747Z" level=info msg="StartContainer for \"ed28b7e749cce989d39a315c386e00eeb29badedd94ced6c00a9bf11ad2b30ee\"" Oct 2 20:08:49.690702 systemd[1]: Started cri-containerd-ed28b7e749cce989d39a315c386e00eeb29badedd94ced6c00a9bf11ad2b30ee.scope. Oct 2 20:08:49.709530 systemd[1]: cri-containerd-ed28b7e749cce989d39a315c386e00eeb29badedd94ced6c00a9bf11ad2b30ee.scope: Deactivated successfully. Oct 2 20:08:49.715549 env[1140]: time="2023-10-02T20:08:49.715501075Z" level=info msg="shim disconnected" id=ed28b7e749cce989d39a315c386e00eeb29badedd94ced6c00a9bf11ad2b30ee Oct 2 20:08:49.715693 env[1140]: time="2023-10-02T20:08:49.715550596Z" level=warning msg="cleaning up after shim disconnected" id=ed28b7e749cce989d39a315c386e00eeb29badedd94ced6c00a9bf11ad2b30ee namespace=k8s.io Oct 2 20:08:49.715693 env[1140]: time="2023-10-02T20:08:49.715560436Z" level=info msg="cleaning up dead shim" Oct 2 20:08:49.723876 env[1140]: time="2023-10-02T20:08:49.723834418Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:08:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1824 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:08:49Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ed28b7e749cce989d39a315c386e00eeb29badedd94ced6c00a9bf11ad2b30ee/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:08:49.724122 env[1140]: time="2023-10-02T20:08:49.724072102Z" level=error msg="copy shim log" error="read /proc/self/fd/58: file already closed" Oct 2 20:08:49.724346 env[1140]: time="2023-10-02T20:08:49.724311506Z" level=error msg="Failed to pipe stderr of container \"ed28b7e749cce989d39a315c386e00eeb29badedd94ced6c00a9bf11ad2b30ee\"" error="reading from a closed fifo" Oct 2 20:08:49.724676 env[1140]: time="2023-10-02T20:08:49.724649512Z" level=error msg="Failed to pipe stdout of container \"ed28b7e749cce989d39a315c386e00eeb29badedd94ced6c00a9bf11ad2b30ee\"" error="reading from a closed fifo" Oct 2 20:08:49.726337 env[1140]: time="2023-10-02T20:08:49.726286460Z" level=error msg="StartContainer for \"ed28b7e749cce989d39a315c386e00eeb29badedd94ced6c00a9bf11ad2b30ee\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:08:49.727044 kubelet[1440]: E1002 20:08:49.726556 1440 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ed28b7e749cce989d39a315c386e00eeb29badedd94ced6c00a9bf11ad2b30ee" Oct 2 20:08:49.727044 kubelet[1440]: E1002 20:08:49.726662 1440 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:08:49.727044 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:08:49.727044 kubelet[1440]: rm 
/hostbin/cilium-mount Oct 2 20:08:49.727302 kubelet[1440]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-sjtjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-97rjw_kube-system(6e084e2d-c888-44cb-a7cc-b0a905c1c524): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:08:49.727464 kubelet[1440]: E1002 20:08:49.726697 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-97rjw" podUID=6e084e2d-c888-44cb-a7cc-b0a905c1c524 Oct 2 20:08:49.879026 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed0853f104bd2db1a096d2bf8f63a8dd083cc5691f7ed13f90b7bc998bd1d7ca-rootfs.mount: Deactivated successfully. 
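Every retry of the mount-cgroup init container fails at the same point: runc aborts during container init with "write /proc/self/attr/keycreate: invalid argument". That procattr file is where the runtime asks SELinux to label kernel keyrings created by the process (here with the spc_t context from the pod's SELinuxOptions), and EINVAL is typically what the kernel returns when the loaded policy will not accept that label. A minimal diagnostic sketch, assuming root on the affected node and treating the spc_t label as a stand-in for whatever the runtime actually writes:

    import errno

    def try_keycreate(label: str = "system_u:system_r:spc_t:s0") -> None:
        # mirrors the operation runc reports failing: writing a keyring-creation
        # label to the SELinux procattr interface for the current process
        try:
            with open("/proc/self/attr/keycreate", "w") as f:
                f.write(label)
            print("label accepted:", label)
        except OSError as exc:
            if exc.errno == errno.EINVAL:
                print("EINVAL: the loaded SELinux policy rejects", label)
            else:
                raise

    if __name__ == "__main__":
        try_keycreate()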
Oct 2 20:08:50.453786 kubelet[1440]: E1002 20:08:50.453745 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:50.665568 kubelet[1440]: I1002 20:08:50.665543 1440 scope.go:115] "RemoveContainer" containerID="ed0853f104bd2db1a096d2bf8f63a8dd083cc5691f7ed13f90b7bc998bd1d7ca" Oct 2 20:08:50.665814 kubelet[1440]: I1002 20:08:50.665800 1440 scope.go:115] "RemoveContainer" containerID="ed0853f104bd2db1a096d2bf8f63a8dd083cc5691f7ed13f90b7bc998bd1d7ca" Oct 2 20:08:50.666792 env[1140]: time="2023-10-02T20:08:50.666755883Z" level=info msg="RemoveContainer for \"ed0853f104bd2db1a096d2bf8f63a8dd083cc5691f7ed13f90b7bc998bd1d7ca\"" Oct 2 20:08:50.667137 env[1140]: time="2023-10-02T20:08:50.667078088Z" level=info msg="RemoveContainer for \"ed0853f104bd2db1a096d2bf8f63a8dd083cc5691f7ed13f90b7bc998bd1d7ca\"" Oct 2 20:08:50.667201 env[1140]: time="2023-10-02T20:08:50.667168130Z" level=error msg="RemoveContainer for \"ed0853f104bd2db1a096d2bf8f63a8dd083cc5691f7ed13f90b7bc998bd1d7ca\" failed" error="failed to set removing state for container \"ed0853f104bd2db1a096d2bf8f63a8dd083cc5691f7ed13f90b7bc998bd1d7ca\": container is already in removing state" Oct 2 20:08:50.667316 kubelet[1440]: E1002 20:08:50.667301 1440 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"ed0853f104bd2db1a096d2bf8f63a8dd083cc5691f7ed13f90b7bc998bd1d7ca\": container is already in removing state" containerID="ed0853f104bd2db1a096d2bf8f63a8dd083cc5691f7ed13f90b7bc998bd1d7ca" Oct 2 20:08:50.667367 kubelet[1440]: E1002 20:08:50.667348 1440 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "ed0853f104bd2db1a096d2bf8f63a8dd083cc5691f7ed13f90b7bc998bd1d7ca": container is already in removing state; Skipping pod "cilium-97rjw_kube-system(6e084e2d-c888-44cb-a7cc-b0a905c1c524)" Oct 2 20:08:50.667434 kubelet[1440]: E1002 20:08:50.667403 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:08:50.667619 kubelet[1440]: E1002 20:08:50.667606 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-97rjw_kube-system(6e084e2d-c888-44cb-a7cc-b0a905c1c524)\"" pod="kube-system/cilium-97rjw" podUID=6e084e2d-c888-44cb-a7cc-b0a905c1c524 Oct 2 20:08:50.669162 env[1140]: time="2023-10-02T20:08:50.669130842Z" level=info msg="RemoveContainer for \"ed0853f104bd2db1a096d2bf8f63a8dd083cc5691f7ed13f90b7bc998bd1d7ca\" returns successfully" Oct 2 20:08:51.454499 kubelet[1440]: E1002 20:08:51.454451 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:51.668597 kubelet[1440]: E1002 20:08:51.668558 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:08:51.668785 kubelet[1440]: E1002 20:08:51.668768 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup 
pod=cilium-97rjw_kube-system(6e084e2d-c888-44cb-a7cc-b0a905c1c524)\"" pod="kube-system/cilium-97rjw" podUID=6e084e2d-c888-44cb-a7cc-b0a905c1c524 Oct 2 20:08:52.207628 kubelet[1440]: W1002 20:08:52.207568 1440 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e084e2d_c888_44cb_a7cc_b0a905c1c524.slice/cri-containerd-ed0853f104bd2db1a096d2bf8f63a8dd083cc5691f7ed13f90b7bc998bd1d7ca.scope WatchSource:0}: container "ed0853f104bd2db1a096d2bf8f63a8dd083cc5691f7ed13f90b7bc998bd1d7ca" in namespace "k8s.io": not found Oct 2 20:08:52.455208 kubelet[1440]: E1002 20:08:52.455164 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:53.456111 kubelet[1440]: E1002 20:08:53.456085 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:54.456923 kubelet[1440]: E1002 20:08:54.456846 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:55.313335 kubelet[1440]: W1002 20:08:55.313290 1440 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e084e2d_c888_44cb_a7cc_b0a905c1c524.slice/cri-containerd-ed28b7e749cce989d39a315c386e00eeb29badedd94ced6c00a9bf11ad2b30ee.scope WatchSource:0}: task ed28b7e749cce989d39a315c386e00eeb29badedd94ced6c00a9bf11ad2b30ee not found: not found Oct 2 20:08:55.457244 kubelet[1440]: E1002 20:08:55.457199 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:56.458283 kubelet[1440]: E1002 20:08:56.458248 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:57.459075 kubelet[1440]: E1002 20:08:57.459035 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:58.459650 kubelet[1440]: E1002 20:08:58.459608 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:59.460543 kubelet[1440]: E1002 20:08:59.460497 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:08:59.972072 update_engine[1130]: I1002 20:08:59.971666 1130 update_attempter.cc:505] Updating boot flags... 
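The recurring "Nameserver limits exceeded" errors are the kubelet trimming the node's resolver configuration: it applies at most three nameservers (the limit classic libc resolvers honor), so with 1.1.1.1, 1.0.0.1 and 8.8.8.8 applied, any further entries in the node's resolv.conf are dropped. The node's actual resolv.conf is not shown in this capture, so the following is only an illustrative sketch of that trimming, with the fourth server invented for the example:

    MAX_NAMESERVERS = 3  # assumed limit, matching the three servers the kubelet reports applying

    def applied_nameservers(resolv_conf_nameservers: list[str]) -> list[str]:
        # keep only the first three entries, as the warning above implies
        return resolv_conf_nameservers[:MAX_NAMESERVERS]

    # hypothetical node configuration with one server too many
    print(applied_nameservers(["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"]))
    # -> ['1.1.1.1', '1.0.0.1', '8.8.8.8']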
Oct 2 20:09:00.461459 kubelet[1440]: E1002 20:09:00.461354 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:01.462924 kubelet[1440]: E1002 20:09:01.462880 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:02.463583 kubelet[1440]: E1002 20:09:02.463539 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:03.437854 kubelet[1440]: E1002 20:09:03.437810 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:03.464483 kubelet[1440]: E1002 20:09:03.464459 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:04.465193 kubelet[1440]: E1002 20:09:04.465142 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:04.620320 kubelet[1440]: E1002 20:09:04.620290 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:09:04.622186 env[1140]: time="2023-10-02T20:09:04.622148591Z" level=info msg="CreateContainer within sandbox \"33fcf33019eb97207377e3758ca06b82d777f2313460c7ee51f4e57a9d0ee2cb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 20:09:04.633923 env[1140]: time="2023-10-02T20:09:04.633877522Z" level=info msg="CreateContainer within sandbox \"33fcf33019eb97207377e3758ca06b82d777f2313460c7ee51f4e57a9d0ee2cb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"e51c25a3b76752ac1019035c7d84d141e7f45f30936182222e0d2fb3ce26956a\"" Oct 2 20:09:04.634346 env[1140]: time="2023-10-02T20:09:04.634315605Z" level=info msg="StartContainer for \"e51c25a3b76752ac1019035c7d84d141e7f45f30936182222e0d2fb3ce26956a\"" Oct 2 20:09:04.652881 systemd[1]: Started cri-containerd-e51c25a3b76752ac1019035c7d84d141e7f45f30936182222e0d2fb3ce26956a.scope. Oct 2 20:09:04.668520 systemd[1]: cri-containerd-e51c25a3b76752ac1019035c7d84d141e7f45f30936182222e0d2fb3ce26956a.scope: Deactivated successfully. 
Oct 2 20:09:04.675276 env[1140]: time="2023-10-02T20:09:04.675213843Z" level=info msg="shim disconnected" id=e51c25a3b76752ac1019035c7d84d141e7f45f30936182222e0d2fb3ce26956a Oct 2 20:09:04.675276 env[1140]: time="2023-10-02T20:09:04.675275923Z" level=warning msg="cleaning up after shim disconnected" id=e51c25a3b76752ac1019035c7d84d141e7f45f30936182222e0d2fb3ce26956a namespace=k8s.io Oct 2 20:09:04.675450 env[1140]: time="2023-10-02T20:09:04.675287483Z" level=info msg="cleaning up dead shim" Oct 2 20:09:04.683807 env[1140]: time="2023-10-02T20:09:04.683747909Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:09:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1878 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:09:04Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e51c25a3b76752ac1019035c7d84d141e7f45f30936182222e0d2fb3ce26956a/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:09:04.684311 env[1140]: time="2023-10-02T20:09:04.684215953Z" level=error msg="copy shim log" error="read /proc/self/fd/58: file already closed" Oct 2 20:09:04.684518 env[1140]: time="2023-10-02T20:09:04.684479875Z" level=error msg="Failed to pipe stdout of container \"e51c25a3b76752ac1019035c7d84d141e7f45f30936182222e0d2fb3ce26956a\"" error="reading from a closed fifo" Oct 2 20:09:04.684610 env[1140]: time="2023-10-02T20:09:04.684506955Z" level=error msg="Failed to pipe stderr of container \"e51c25a3b76752ac1019035c7d84d141e7f45f30936182222e0d2fb3ce26956a\"" error="reading from a closed fifo" Oct 2 20:09:04.686076 env[1140]: time="2023-10-02T20:09:04.686041407Z" level=error msg="StartContainer for \"e51c25a3b76752ac1019035c7d84d141e7f45f30936182222e0d2fb3ce26956a\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:09:04.686181 kubelet[1440]: E1002 20:09:04.686158 1440 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e51c25a3b76752ac1019035c7d84d141e7f45f30936182222e0d2fb3ce26956a" Oct 2 20:09:04.686278 kubelet[1440]: E1002 20:09:04.686253 1440 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:09:04.686278 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:09:04.686278 kubelet[1440]: rm /hostbin/cilium-mount Oct 2 20:09:04.686278 kubelet[1440]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-sjtjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-97rjw_kube-system(6e084e2d-c888-44cb-a7cc-b0a905c1c524): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:09:04.686419 kubelet[1440]: E1002 20:09:04.686299 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-97rjw" podUID=6e084e2d-c888-44cb-a7cc-b0a905c1c524 Oct 2 20:09:05.466280 kubelet[1440]: E1002 20:09:05.466239 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:05.628066 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e51c25a3b76752ac1019035c7d84d141e7f45f30936182222e0d2fb3ce26956a-rootfs.mount: Deactivated successfully. 
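The same failure signature now repeats for every restart of mount-cgroup: the shim disconnects, both pipes report "reading from a closed fifo", and StartContainer surfaces the keycreate error. When working with a capture like this one, the level=error containerd entries are the quickest way to pull out each attempt's container ID and root cause; a small extraction sketch, assuming the console log has been saved to a file (the path below is hypothetical):

    import re

    # matches the containerd env[...] entries above, e.g.
    #   env[1140]: time="..." level=error msg="StartContainer for \"...\" failed" ...
    ERROR_LINE = re.compile(r'env\[\d+\]: time="([^"]+)" level=error msg="((?:[^"\\]|\\.)*)"')

    def containerd_errors(path: str = "/tmp/node-console.log"):  # hypothetical path
        with open(path, encoding="utf-8", errors="replace") as log:
            for line in log:
                for ts, msg in ERROR_LINE.findall(line):
                    yield ts, msg.replace('\\"', '"')

    if __name__ == "__main__":
        for ts, msg in containerd_errors():
            print(ts, msg)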
Oct 2 20:09:05.690505 kubelet[1440]: I1002 20:09:05.690204 1440 scope.go:115] "RemoveContainer" containerID="ed28b7e749cce989d39a315c386e00eeb29badedd94ced6c00a9bf11ad2b30ee" Oct 2 20:09:05.690505 kubelet[1440]: I1002 20:09:05.690477 1440 scope.go:115] "RemoveContainer" containerID="ed28b7e749cce989d39a315c386e00eeb29badedd94ced6c00a9bf11ad2b30ee" Oct 2 20:09:05.691352 env[1140]: time="2023-10-02T20:09:05.691318016Z" level=info msg="RemoveContainer for \"ed28b7e749cce989d39a315c386e00eeb29badedd94ced6c00a9bf11ad2b30ee\"" Oct 2 20:09:05.691702 env[1140]: time="2023-10-02T20:09:05.691670459Z" level=info msg="RemoveContainer for \"ed28b7e749cce989d39a315c386e00eeb29badedd94ced6c00a9bf11ad2b30ee\"" Oct 2 20:09:05.691853 env[1140]: time="2023-10-02T20:09:05.691810660Z" level=error msg="RemoveContainer for \"ed28b7e749cce989d39a315c386e00eeb29badedd94ced6c00a9bf11ad2b30ee\" failed" error="failed to set removing state for container \"ed28b7e749cce989d39a315c386e00eeb29badedd94ced6c00a9bf11ad2b30ee\": container is already in removing state" Oct 2 20:09:05.692085 kubelet[1440]: E1002 20:09:05.692066 1440 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"ed28b7e749cce989d39a315c386e00eeb29badedd94ced6c00a9bf11ad2b30ee\": container is already in removing state" containerID="ed28b7e749cce989d39a315c386e00eeb29badedd94ced6c00a9bf11ad2b30ee" Oct 2 20:09:05.692154 kubelet[1440]: I1002 20:09:05.692111 1440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:ed28b7e749cce989d39a315c386e00eeb29badedd94ced6c00a9bf11ad2b30ee} err="rpc error: code = Unknown desc = failed to set removing state for container \"ed28b7e749cce989d39a315c386e00eeb29badedd94ced6c00a9bf11ad2b30ee\": container is already in removing state" Oct 2 20:09:05.694739 env[1140]: time="2023-10-02T20:09:05.694708281Z" level=info msg="RemoveContainer for \"ed28b7e749cce989d39a315c386e00eeb29badedd94ced6c00a9bf11ad2b30ee\" returns successfully" Oct 2 20:09:05.695040 kubelet[1440]: E1002 20:09:05.695016 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:09:05.695376 kubelet[1440]: E1002 20:09:05.695283 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-97rjw_kube-system(6e084e2d-c888-44cb-a7cc-b0a905c1c524)\"" pod="kube-system/cilium-97rjw" podUID=6e084e2d-c888-44cb-a7cc-b0a905c1c524 Oct 2 20:09:06.467176 kubelet[1440]: E1002 20:09:06.467132 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:07.467524 kubelet[1440]: E1002 20:09:07.467464 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:07.780361 kubelet[1440]: W1002 20:09:07.780115 1440 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e084e2d_c888_44cb_a7cc_b0a905c1c524.slice/cri-containerd-e51c25a3b76752ac1019035c7d84d141e7f45f30936182222e0d2fb3ce26956a.scope WatchSource:0}: task e51c25a3b76752ac1019035c7d84d141e7f45f30936182222e0d2fb3ce26956a not found: not found Oct 2 20:09:08.468526 kubelet[1440]: E1002 20:09:08.468449 1440 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:09.469063 kubelet[1440]: E1002 20:09:09.468999 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:10.469645 kubelet[1440]: E1002 20:09:10.469601 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:11.470325 kubelet[1440]: E1002 20:09:11.470262 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:12.471061 kubelet[1440]: E1002 20:09:12.471026 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:13.471397 kubelet[1440]: E1002 20:09:13.471359 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:14.471837 kubelet[1440]: E1002 20:09:14.471792 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:15.472375 kubelet[1440]: E1002 20:09:15.472326 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:16.472937 kubelet[1440]: E1002 20:09:16.472892 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:17.473087 kubelet[1440]: E1002 20:09:17.473010 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:17.621446 kubelet[1440]: E1002 20:09:17.621417 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:09:17.621856 kubelet[1440]: E1002 20:09:17.621838 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-97rjw_kube-system(6e084e2d-c888-44cb-a7cc-b0a905c1c524)\"" pod="kube-system/cilium-97rjw" podUID=6e084e2d-c888-44cb-a7cc-b0a905c1c524 Oct 2 20:09:18.473851 kubelet[1440]: E1002 20:09:18.473825 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:19.475321 kubelet[1440]: E1002 20:09:19.474815 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:20.475303 kubelet[1440]: E1002 20:09:20.475240 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:21.476012 kubelet[1440]: E1002 20:09:21.475970 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:22.476858 kubelet[1440]: E1002 20:09:22.476802 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:23.435352 kubelet[1440]: E1002 20:09:23.435292 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:23.476924 kubelet[1440]: E1002 20:09:23.476878 1440 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:24.477048 kubelet[1440]: E1002 20:09:24.477010 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:25.478267 kubelet[1440]: E1002 20:09:25.478229 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:26.479102 kubelet[1440]: E1002 20:09:26.479049 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:27.479625 kubelet[1440]: E1002 20:09:27.479569 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:28.480318 kubelet[1440]: E1002 20:09:28.480211 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:29.481118 kubelet[1440]: E1002 20:09:29.481083 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:30.481995 kubelet[1440]: E1002 20:09:30.481948 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:31.482277 kubelet[1440]: E1002 20:09:31.482251 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:32.483820 kubelet[1440]: E1002 20:09:32.483766 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:32.620127 kubelet[1440]: E1002 20:09:32.620094 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:09:32.622067 env[1140]: time="2023-10-02T20:09:32.622015756Z" level=info msg="CreateContainer within sandbox \"33fcf33019eb97207377e3758ca06b82d777f2313460c7ee51f4e57a9d0ee2cb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 20:09:32.631586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3806368338.mount: Deactivated successfully. Oct 2 20:09:32.634439 env[1140]: time="2023-10-02T20:09:32.634400433Z" level=info msg="CreateContainer within sandbox \"33fcf33019eb97207377e3758ca06b82d777f2313460c7ee51f4e57a9d0ee2cb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"06e1ef0650a14b7a5bc783f762eb56047aeeb4b76c551a147f1ef2a31aefc3dd\"" Oct 2 20:09:32.635039 env[1140]: time="2023-10-02T20:09:32.634997995Z" level=info msg="StartContainer for \"06e1ef0650a14b7a5bc783f762eb56047aeeb4b76c551a147f1ef2a31aefc3dd\"" Oct 2 20:09:32.653796 systemd[1]: Started cri-containerd-06e1ef0650a14b7a5bc783f762eb56047aeeb4b76c551a147f1ef2a31aefc3dd.scope. Oct 2 20:09:32.673382 systemd[1]: cri-containerd-06e1ef0650a14b7a5bc783f762eb56047aeeb4b76c551a147f1ef2a31aefc3dd.scope: Deactivated successfully. 
Oct 2 20:09:32.679701 env[1140]: time="2023-10-02T20:09:32.679642287Z" level=info msg="shim disconnected" id=06e1ef0650a14b7a5bc783f762eb56047aeeb4b76c551a147f1ef2a31aefc3dd Oct 2 20:09:32.679701 env[1140]: time="2023-10-02T20:09:32.679697287Z" level=warning msg="cleaning up after shim disconnected" id=06e1ef0650a14b7a5bc783f762eb56047aeeb4b76c551a147f1ef2a31aefc3dd namespace=k8s.io Oct 2 20:09:32.679882 env[1140]: time="2023-10-02T20:09:32.679707487Z" level=info msg="cleaning up dead shim" Oct 2 20:09:32.688460 env[1140]: time="2023-10-02T20:09:32.688411753Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:09:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1918 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:09:32Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/06e1ef0650a14b7a5bc783f762eb56047aeeb4b76c551a147f1ef2a31aefc3dd/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:09:32.688719 env[1140]: time="2023-10-02T20:09:32.688668233Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 20:09:32.688902 env[1140]: time="2023-10-02T20:09:32.688853514Z" level=error msg="Failed to pipe stdout of container \"06e1ef0650a14b7a5bc783f762eb56047aeeb4b76c551a147f1ef2a31aefc3dd\"" error="reading from a closed fifo" Oct 2 20:09:32.688947 env[1140]: time="2023-10-02T20:09:32.688864794Z" level=error msg="Failed to pipe stderr of container \"06e1ef0650a14b7a5bc783f762eb56047aeeb4b76c551a147f1ef2a31aefc3dd\"" error="reading from a closed fifo" Oct 2 20:09:32.690444 env[1140]: time="2023-10-02T20:09:32.690380038Z" level=error msg="StartContainer for \"06e1ef0650a14b7a5bc783f762eb56047aeeb4b76c551a147f1ef2a31aefc3dd\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:09:32.690702 kubelet[1440]: E1002 20:09:32.690681 1440 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="06e1ef0650a14b7a5bc783f762eb56047aeeb4b76c551a147f1ef2a31aefc3dd" Oct 2 20:09:32.690914 kubelet[1440]: E1002 20:09:32.690898 1440 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:09:32.690914 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:09:32.690914 kubelet[1440]: rm /hostbin/cilium-mount Oct 2 20:09:32.690914 kubelet[1440]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-sjtjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-97rjw_kube-system(6e084e2d-c888-44cb-a7cc-b0a905c1c524): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:09:32.691313 kubelet[1440]: E1002 20:09:32.691287 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-97rjw" podUID=6e084e2d-c888-44cb-a7cc-b0a905c1c524 Oct 2 20:09:32.732974 kubelet[1440]: I1002 20:09:32.732951 1440 scope.go:115] "RemoveContainer" containerID="e51c25a3b76752ac1019035c7d84d141e7f45f30936182222e0d2fb3ce26956a" Oct 2 20:09:32.733425 kubelet[1440]: I1002 20:09:32.733386 1440 scope.go:115] "RemoveContainer" containerID="e51c25a3b76752ac1019035c7d84d141e7f45f30936182222e0d2fb3ce26956a" Oct 2 20:09:32.735287 env[1140]: time="2023-10-02T20:09:32.734474529Z" level=info msg="RemoveContainer for \"e51c25a3b76752ac1019035c7d84d141e7f45f30936182222e0d2fb3ce26956a\"" Oct 2 20:09:32.735728 env[1140]: time="2023-10-02T20:09:32.735699013Z" level=info msg="RemoveContainer for \"e51c25a3b76752ac1019035c7d84d141e7f45f30936182222e0d2fb3ce26956a\"" Oct 2 20:09:32.735855 env[1140]: time="2023-10-02T20:09:32.735826293Z" level=error msg="RemoveContainer for \"e51c25a3b76752ac1019035c7d84d141e7f45f30936182222e0d2fb3ce26956a\" failed" error="failed to set removing state for container \"e51c25a3b76752ac1019035c7d84d141e7f45f30936182222e0d2fb3ce26956a\": container is already in removing state" Oct 2 20:09:32.735988 kubelet[1440]: E1002 20:09:32.735970 1440 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"e51c25a3b76752ac1019035c7d84d141e7f45f30936182222e0d2fb3ce26956a\": container is already in removing state" 
containerID="e51c25a3b76752ac1019035c7d84d141e7f45f30936182222e0d2fb3ce26956a" Oct 2 20:09:32.736036 kubelet[1440]: E1002 20:09:32.735999 1440 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "e51c25a3b76752ac1019035c7d84d141e7f45f30936182222e0d2fb3ce26956a": container is already in removing state; Skipping pod "cilium-97rjw_kube-system(6e084e2d-c888-44cb-a7cc-b0a905c1c524)" Oct 2 20:09:32.736073 kubelet[1440]: E1002 20:09:32.736060 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:09:32.736295 kubelet[1440]: E1002 20:09:32.736274 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-97rjw_kube-system(6e084e2d-c888-44cb-a7cc-b0a905c1c524)\"" pod="kube-system/cilium-97rjw" podUID=6e084e2d-c888-44cb-a7cc-b0a905c1c524 Oct 2 20:09:32.736909 env[1140]: time="2023-10-02T20:09:32.736879056Z" level=info msg="RemoveContainer for \"e51c25a3b76752ac1019035c7d84d141e7f45f30936182222e0d2fb3ce26956a\" returns successfully" Oct 2 20:09:33.484320 kubelet[1440]: E1002 20:09:33.484259 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:33.629440 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06e1ef0650a14b7a5bc783f762eb56047aeeb4b76c551a147f1ef2a31aefc3dd-rootfs.mount: Deactivated successfully. Oct 2 20:09:34.484821 kubelet[1440]: E1002 20:09:34.484759 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:35.485053 kubelet[1440]: E1002 20:09:35.485023 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:35.784244 kubelet[1440]: W1002 20:09:35.784125 1440 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e084e2d_c888_44cb_a7cc_b0a905c1c524.slice/cri-containerd-06e1ef0650a14b7a5bc783f762eb56047aeeb4b76c551a147f1ef2a31aefc3dd.scope WatchSource:0}: task 06e1ef0650a14b7a5bc783f762eb56047aeeb4b76c551a147f1ef2a31aefc3dd not found: not found Oct 2 20:09:36.486366 kubelet[1440]: E1002 20:09:36.486337 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:37.487325 kubelet[1440]: E1002 20:09:37.487253 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:38.488320 kubelet[1440]: E1002 20:09:38.488280 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:39.489358 kubelet[1440]: E1002 20:09:39.489321 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:40.490368 kubelet[1440]: E1002 20:09:40.490327 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:41.491195 kubelet[1440]: E1002 20:09:41.491148 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
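By this point the CrashLoopBackOff delay for mount-cgroup has gone from 10s to 20s to 40s, the kubelet doubling the wait after each failed start. The five-minute cap below is the commonly documented kubelet default rather than something visible in this log, so treat it as an assumption; the doubling itself is what the entries above show:

    def crashloop_backoff(base: int = 10, cap: int = 300, steps: int = 6):
        # reproduces the 10s, 20s, 40s, ... progression seen in the log, capped at `cap`
        delay = base
        for _ in range(steps):
            yield delay
            delay = min(delay * 2, cap)

    print(list(crashloop_backoff()))  # [10, 20, 40, 80, 160, 300]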
Oct 2 20:09:42.491732 kubelet[1440]: E1002 20:09:42.491680 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:43.435868 kubelet[1440]: E1002 20:09:43.435827 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:43.492230 kubelet[1440]: E1002 20:09:43.492202 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:44.492826 kubelet[1440]: E1002 20:09:44.492754 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:45.493468 kubelet[1440]: E1002 20:09:45.493429 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:45.620652 kubelet[1440]: E1002 20:09:45.620611 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:09:45.620777 kubelet[1440]: E1002 20:09:45.620673 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:09:45.620832 kubelet[1440]: E1002 20:09:45.620812 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-97rjw_kube-system(6e084e2d-c888-44cb-a7cc-b0a905c1c524)\"" pod="kube-system/cilium-97rjw" podUID=6e084e2d-c888-44cb-a7cc-b0a905c1c524 Oct 2 20:09:46.493565 kubelet[1440]: E1002 20:09:46.493533 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:47.494535 kubelet[1440]: E1002 20:09:47.494494 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:48.495848 kubelet[1440]: E1002 20:09:48.495781 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:49.496363 kubelet[1440]: E1002 20:09:49.496320 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:50.497023 kubelet[1440]: E1002 20:09:50.496979 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:51.497667 kubelet[1440]: E1002 20:09:51.497635 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:52.498895 kubelet[1440]: E1002 20:09:52.498844 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:53.500023 kubelet[1440]: E1002 20:09:53.499977 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:54.500372 kubelet[1440]: E1002 20:09:54.500342 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:55.501744 kubelet[1440]: E1002 20:09:55.501711 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 20:09:56.502647 kubelet[1440]: E1002 20:09:56.502610 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:57.503893 kubelet[1440]: E1002 20:09:57.503857 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:57.620418 kubelet[1440]: E1002 20:09:57.620386 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:09:57.620616 kubelet[1440]: E1002 20:09:57.620601 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-97rjw_kube-system(6e084e2d-c888-44cb-a7cc-b0a905c1c524)\"" pod="kube-system/cilium-97rjw" podUID=6e084e2d-c888-44cb-a7cc-b0a905c1c524 Oct 2 20:09:58.504019 kubelet[1440]: E1002 20:09:58.503973 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:09:59.505029 kubelet[1440]: E1002 20:09:59.504999 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:00.505493 kubelet[1440]: E1002 20:10:00.505463 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:01.505859 kubelet[1440]: E1002 20:10:01.505826 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:02.506553 kubelet[1440]: E1002 20:10:02.506511 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:03.435479 kubelet[1440]: E1002 20:10:03.435420 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:03.507583 kubelet[1440]: E1002 20:10:03.507552 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:04.508484 kubelet[1440]: E1002 20:10:04.508448 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:05.509240 kubelet[1440]: E1002 20:10:05.509207 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:06.509368 kubelet[1440]: E1002 20:10:06.509332 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:07.510266 kubelet[1440]: E1002 20:10:07.510215 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:08.510324 kubelet[1440]: E1002 20:10:08.510299 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:09.511316 kubelet[1440]: E1002 20:10:09.511275 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:10.512156 kubelet[1440]: E1002 20:10:10.512121 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
20:10:11.512501 kubelet[1440]: E1002 20:10:11.512452 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:11.620669 kubelet[1440]: E1002 20:10:11.620635 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:10:11.620862 kubelet[1440]: E1002 20:10:11.620843 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-97rjw_kube-system(6e084e2d-c888-44cb-a7cc-b0a905c1c524)\"" pod="kube-system/cilium-97rjw" podUID=6e084e2d-c888-44cb-a7cc-b0a905c1c524 Oct 2 20:10:12.513335 kubelet[1440]: E1002 20:10:12.513296 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:13.514068 kubelet[1440]: E1002 20:10:13.514045 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:14.515313 kubelet[1440]: E1002 20:10:14.515274 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:15.516112 kubelet[1440]: E1002 20:10:15.516052 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:16.516959 kubelet[1440]: E1002 20:10:16.516875 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:17.517570 kubelet[1440]: E1002 20:10:17.517510 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:18.517817 kubelet[1440]: E1002 20:10:18.517792 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:19.519186 kubelet[1440]: E1002 20:10:19.519156 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:20.519788 kubelet[1440]: E1002 20:10:20.519764 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:21.520274 kubelet[1440]: E1002 20:10:21.520211 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:22.520831 kubelet[1440]: E1002 20:10:22.520766 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:23.435402 kubelet[1440]: E1002 20:10:23.435352 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:23.471420 kubelet[1440]: E1002 20:10:23.471379 1440 kubelet_node_status.go:452] "Node not becoming ready in time after startup" Oct 2 20:10:23.510241 kubelet[1440]: E1002 20:10:23.510206 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:10:23.521274 kubelet[1440]: E1002 20:10:23.521241 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:24.522416 
kubelet[1440]: E1002 20:10:24.522357 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:25.523150 kubelet[1440]: E1002 20:10:25.523109 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:25.620476 kubelet[1440]: E1002 20:10:25.620443 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:10:25.622891 env[1140]: time="2023-10-02T20:10:25.622824838Z" level=info msg="CreateContainer within sandbox \"33fcf33019eb97207377e3758ca06b82d777f2313460c7ee51f4e57a9d0ee2cb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 20:10:25.631207 env[1140]: time="2023-10-02T20:10:25.631119229Z" level=info msg="CreateContainer within sandbox \"33fcf33019eb97207377e3758ca06b82d777f2313460c7ee51f4e57a9d0ee2cb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"e2e411525a20bca018b9fd949c736c7124b7dcba4cfe374d66d92c9dbea2b0f4\"" Oct 2 20:10:25.632246 env[1140]: time="2023-10-02T20:10:25.632018197Z" level=info msg="StartContainer for \"e2e411525a20bca018b9fd949c736c7124b7dcba4cfe374d66d92c9dbea2b0f4\"" Oct 2 20:10:25.646743 systemd[1]: Started cri-containerd-e2e411525a20bca018b9fd949c736c7124b7dcba4cfe374d66d92c9dbea2b0f4.scope. Oct 2 20:10:25.682011 systemd[1]: cri-containerd-e2e411525a20bca018b9fd949c736c7124b7dcba4cfe374d66d92c9dbea2b0f4.scope: Deactivated successfully. Oct 2 20:10:25.685532 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2e411525a20bca018b9fd949c736c7124b7dcba4cfe374d66d92c9dbea2b0f4-rootfs.mount: Deactivated successfully. 
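[Annotation, not part of the captured log] At this point containerd has built attempt 4 of the mount-cgroup init container for cilium-97rjw inside sandbox 33fcf330…, and systemd has started and almost immediately deactivated its cri-containerd scope; the failure details and the full container spec are dumped in the entries that follow. As a reading aid, the shell sketch below reconstructs what that init container is meant to run, copied from the spec dump below; the BIN_PATH and CGROUP_ROOT values also come from that logged Env. It is for orientation only and never completes successfully anywhere in this log.

    # Command of the mount-cgroup init container, as recorded in the spec dump below.
    # Per the logged Env: BIN_PATH=/opt/cni/bin, CGROUP_ROOT=/run/cilium/cgroupv2.
    # It copies cilium-mount onto the host, runs it inside the host's cgroup and
    # mount namespaces (via /hostproc/1/ns/*), then removes the copied binary.
    cp /usr/bin/cilium-mount /hostbin/cilium-mount
    nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt \
      "${BIN_PATH}/cilium-mount" "$CGROUP_ROOT"
    rm /hostbin/cilium-mount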
Oct 2 20:10:25.693423 env[1140]: time="2023-10-02T20:10:25.693373563Z" level=info msg="shim disconnected" id=e2e411525a20bca018b9fd949c736c7124b7dcba4cfe374d66d92c9dbea2b0f4 Oct 2 20:10:25.693598 env[1140]: time="2023-10-02T20:10:25.693426603Z" level=warning msg="cleaning up after shim disconnected" id=e2e411525a20bca018b9fd949c736c7124b7dcba4cfe374d66d92c9dbea2b0f4 namespace=k8s.io Oct 2 20:10:25.693598 env[1140]: time="2023-10-02T20:10:25.693436403Z" level=info msg="cleaning up dead shim" Oct 2 20:10:25.701944 env[1140]: time="2023-10-02T20:10:25.701390632Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:10:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1961 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:10:25Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e2e411525a20bca018b9fd949c736c7124b7dcba4cfe374d66d92c9dbea2b0f4/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:10:25.701944 env[1140]: time="2023-10-02T20:10:25.701656354Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 20:10:25.701944 env[1140]: time="2023-10-02T20:10:25.701840036Z" level=error msg="Failed to pipe stdout of container \"e2e411525a20bca018b9fd949c736c7124b7dcba4cfe374d66d92c9dbea2b0f4\"" error="reading from a closed fifo" Oct 2 20:10:25.701944 env[1140]: time="2023-10-02T20:10:25.701849796Z" level=error msg="Failed to pipe stderr of container \"e2e411525a20bca018b9fd949c736c7124b7dcba4cfe374d66d92c9dbea2b0f4\"" error="reading from a closed fifo" Oct 2 20:10:25.705161 env[1140]: time="2023-10-02T20:10:25.704422978Z" level=error msg="StartContainer for \"e2e411525a20bca018b9fd949c736c7124b7dcba4cfe374d66d92c9dbea2b0f4\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:10:25.705588 kubelet[1440]: E1002 20:10:25.705393 1440 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e2e411525a20bca018b9fd949c736c7124b7dcba4cfe374d66d92c9dbea2b0f4" Oct 2 20:10:25.705588 kubelet[1440]: E1002 20:10:25.705505 1440 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:10:25.705588 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:10:25.705588 kubelet[1440]: rm /hostbin/cilium-mount Oct 2 20:10:25.705763 kubelet[1440]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-sjtjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-97rjw_kube-system(6e084e2d-c888-44cb-a7cc-b0a905c1c524): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:10:25.705816 kubelet[1440]: E1002 20:10:25.705563 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-97rjw" podUID=6e084e2d-c888-44cb-a7cc-b0a905c1c524 Oct 2 20:10:25.810180 kubelet[1440]: I1002 20:10:25.809215 1440 scope.go:115] "RemoveContainer" containerID="06e1ef0650a14b7a5bc783f762eb56047aeeb4b76c551a147f1ef2a31aefc3dd" Oct 2 20:10:25.810180 kubelet[1440]: I1002 20:10:25.809575 1440 scope.go:115] "RemoveContainer" containerID="06e1ef0650a14b7a5bc783f762eb56047aeeb4b76c551a147f1ef2a31aefc3dd" Oct 2 20:10:25.811109 env[1140]: time="2023-10-02T20:10:25.811063572Z" level=info msg="RemoveContainer for \"06e1ef0650a14b7a5bc783f762eb56047aeeb4b76c551a147f1ef2a31aefc3dd\"" Oct 2 20:10:25.811821 env[1140]: time="2023-10-02T20:10:25.811786498Z" level=info msg="RemoveContainer for \"06e1ef0650a14b7a5bc783f762eb56047aeeb4b76c551a147f1ef2a31aefc3dd\"" Oct 2 20:10:25.811907 env[1140]: time="2023-10-02T20:10:25.811878619Z" level=error msg="RemoveContainer for \"06e1ef0650a14b7a5bc783f762eb56047aeeb4b76c551a147f1ef2a31aefc3dd\" failed" error="failed to set removing state for container \"06e1ef0650a14b7a5bc783f762eb56047aeeb4b76c551a147f1ef2a31aefc3dd\": container is already in removing state" Oct 2 20:10:25.812025 kubelet[1440]: E1002 20:10:25.812008 1440 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"06e1ef0650a14b7a5bc783f762eb56047aeeb4b76c551a147f1ef2a31aefc3dd\": container is already in removing state" 
containerID="06e1ef0650a14b7a5bc783f762eb56047aeeb4b76c551a147f1ef2a31aefc3dd" Oct 2 20:10:25.812100 kubelet[1440]: E1002 20:10:25.812039 1440 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "06e1ef0650a14b7a5bc783f762eb56047aeeb4b76c551a147f1ef2a31aefc3dd": container is already in removing state; Skipping pod "cilium-97rjw_kube-system(6e084e2d-c888-44cb-a7cc-b0a905c1c524)" Oct 2 20:10:25.812100 kubelet[1440]: E1002 20:10:25.812095 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:10:25.812354 kubelet[1440]: E1002 20:10:25.812340 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-97rjw_kube-system(6e084e2d-c888-44cb-a7cc-b0a905c1c524)\"" pod="kube-system/cilium-97rjw" podUID=6e084e2d-c888-44cb-a7cc-b0a905c1c524 Oct 2 20:10:25.813672 env[1140]: time="2023-10-02T20:10:25.813639634Z" level=info msg="RemoveContainer for \"06e1ef0650a14b7a5bc783f762eb56047aeeb4b76c551a147f1ef2a31aefc3dd\" returns successfully" Oct 2 20:10:26.524294 kubelet[1440]: E1002 20:10:26.524255 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:27.525424 kubelet[1440]: E1002 20:10:27.525374 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:28.510999 kubelet[1440]: E1002 20:10:28.510960 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:10:28.526235 kubelet[1440]: E1002 20:10:28.526189 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:28.797419 kubelet[1440]: W1002 20:10:28.797071 1440 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e084e2d_c888_44cb_a7cc_b0a905c1c524.slice/cri-containerd-e2e411525a20bca018b9fd949c736c7124b7dcba4cfe374d66d92c9dbea2b0f4.scope WatchSource:0}: task e2e411525a20bca018b9fd949c736c7124b7dcba4cfe374d66d92c9dbea2b0f4 not found: not found Oct 2 20:10:29.526746 kubelet[1440]: E1002 20:10:29.526693 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:30.526856 kubelet[1440]: E1002 20:10:30.526802 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:31.527700 kubelet[1440]: E1002 20:10:31.527643 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:32.528233 kubelet[1440]: E1002 20:10:32.528187 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:33.512408 kubelet[1440]: E1002 20:10:33.512369 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:10:33.528612 kubelet[1440]: E1002 
20:10:33.528584 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:34.529678 kubelet[1440]: E1002 20:10:34.529606 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:35.530000 kubelet[1440]: E1002 20:10:35.529929 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:36.530243 kubelet[1440]: E1002 20:10:36.530186 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:37.530844 kubelet[1440]: E1002 20:10:37.530767 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:38.513843 kubelet[1440]: E1002 20:10:38.513805 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:10:38.531098 kubelet[1440]: E1002 20:10:38.531070 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:39.531882 kubelet[1440]: E1002 20:10:39.531820 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:39.620544 kubelet[1440]: E1002 20:10:39.620500 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:10:39.620742 kubelet[1440]: E1002 20:10:39.620721 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-97rjw_kube-system(6e084e2d-c888-44cb-a7cc-b0a905c1c524)\"" pod="kube-system/cilium-97rjw" podUID=6e084e2d-c888-44cb-a7cc-b0a905c1c524 Oct 2 20:10:40.532431 kubelet[1440]: E1002 20:10:40.532377 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:41.533265 kubelet[1440]: E1002 20:10:41.533208 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:42.534187 kubelet[1440]: E1002 20:10:42.534144 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:43.435476 kubelet[1440]: E1002 20:10:43.435420 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:43.515170 kubelet[1440]: E1002 20:10:43.515150 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:10:43.534570 kubelet[1440]: E1002 20:10:43.534546 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:44.535670 kubelet[1440]: E1002 20:10:44.535623 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:45.535866 kubelet[1440]: E1002 20:10:45.535827 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:46.536257 kubelet[1440]: E1002 20:10:46.536199 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:47.537161 kubelet[1440]: E1002 20:10:47.537117 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:48.515835 kubelet[1440]: E1002 20:10:48.515795 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:10:48.538049 kubelet[1440]: E1002 20:10:48.538012 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:49.538963 kubelet[1440]: E1002 20:10:49.538917 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:50.541029 kubelet[1440]: E1002 20:10:50.540414 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:51.540723 kubelet[1440]: E1002 20:10:51.540667 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:51.620943 kubelet[1440]: E1002 20:10:51.620906 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:10:51.621289 kubelet[1440]: E1002 20:10:51.621114 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-97rjw_kube-system(6e084e2d-c888-44cb-a7cc-b0a905c1c524)\"" pod="kube-system/cilium-97rjw" podUID=6e084e2d-c888-44cb-a7cc-b0a905c1c524 Oct 2 20:10:52.541473 kubelet[1440]: E1002 20:10:52.541434 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:53.516803 kubelet[1440]: E1002 20:10:53.516767 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:10:53.542302 kubelet[1440]: E1002 20:10:53.542256 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:54.543206 kubelet[1440]: E1002 20:10:54.543146 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:55.543825 kubelet[1440]: E1002 20:10:55.543781 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:56.544569 kubelet[1440]: E1002 20:10:56.544521 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:57.545424 kubelet[1440]: E1002 20:10:57.545382 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:58.517592 kubelet[1440]: E1002 20:10:58.517567 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni 
plugin not initialized" Oct 2 20:10:58.546900 kubelet[1440]: E1002 20:10:58.546866 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:10:59.547624 kubelet[1440]: E1002 20:10:59.547580 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:00.547855 kubelet[1440]: E1002 20:11:00.547819 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:01.549191 kubelet[1440]: E1002 20:11:01.549150 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:02.550029 kubelet[1440]: E1002 20:11:02.549981 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:03.435921 kubelet[1440]: E1002 20:11:03.435869 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:03.519022 kubelet[1440]: E1002 20:11:03.518980 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:11:03.550198 kubelet[1440]: E1002 20:11:03.550173 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:04.550481 kubelet[1440]: E1002 20:11:04.550434 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:05.551233 kubelet[1440]: E1002 20:11:05.551175 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:06.552029 kubelet[1440]: E1002 20:11:06.551984 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:06.620472 kubelet[1440]: E1002 20:11:06.620440 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:11:06.620721 kubelet[1440]: E1002 20:11:06.620696 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-97rjw_kube-system(6e084e2d-c888-44cb-a7cc-b0a905c1c524)\"" pod="kube-system/cilium-97rjw" podUID=6e084e2d-c888-44cb-a7cc-b0a905c1c524 Oct 2 20:11:07.552968 kubelet[1440]: E1002 20:11:07.552923 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:07.620210 kubelet[1440]: E1002 20:11:07.620173 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:11:08.520383 kubelet[1440]: E1002 20:11:08.520347 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:11:08.553829 kubelet[1440]: E1002 20:11:08.553800 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
20:11:09.554319 kubelet[1440]: E1002 20:11:09.554277 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:10.554676 kubelet[1440]: E1002 20:11:10.554638 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:11.555303 kubelet[1440]: E1002 20:11:11.555268 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:12.556686 kubelet[1440]: E1002 20:11:12.556632 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:13.521775 kubelet[1440]: E1002 20:11:13.521750 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:11:13.557383 kubelet[1440]: E1002 20:11:13.557361 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:14.558833 kubelet[1440]: E1002 20:11:14.558642 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:15.559647 kubelet[1440]: E1002 20:11:15.559616 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:16.560530 kubelet[1440]: E1002 20:11:16.560496 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:17.561841 kubelet[1440]: E1002 20:11:17.561804 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:18.522906 kubelet[1440]: E1002 20:11:18.522863 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:11:18.563085 kubelet[1440]: E1002 20:11:18.563062 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:18.620267 kubelet[1440]: E1002 20:11:18.620241 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:11:18.620610 kubelet[1440]: E1002 20:11:18.620593 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-97rjw_kube-system(6e084e2d-c888-44cb-a7cc-b0a905c1c524)\"" pod="kube-system/cilium-97rjw" podUID=6e084e2d-c888-44cb-a7cc-b0a905c1c524 Oct 2 20:11:19.563641 kubelet[1440]: E1002 20:11:19.563608 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:20.564735 kubelet[1440]: E1002 20:11:20.564699 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:21.565328 kubelet[1440]: E1002 20:11:21.565286 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:22.565931 kubelet[1440]: E1002 20:11:22.565889 1440 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:23.435374 kubelet[1440]: E1002 20:11:23.435328 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:23.524363 kubelet[1440]: E1002 20:11:23.524341 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:11:23.566698 kubelet[1440]: E1002 20:11:23.566679 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:24.568026 kubelet[1440]: E1002 20:11:24.567993 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:25.568505 kubelet[1440]: E1002 20:11:25.568465 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:26.568968 kubelet[1440]: E1002 20:11:26.568920 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:27.569063 kubelet[1440]: E1002 20:11:27.569010 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:28.525723 kubelet[1440]: E1002 20:11:28.525691 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:11:28.570013 kubelet[1440]: E1002 20:11:28.569978 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:29.570139 kubelet[1440]: E1002 20:11:29.570077 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:29.620147 kubelet[1440]: E1002 20:11:29.620126 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:11:29.620541 kubelet[1440]: E1002 20:11:29.620528 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-97rjw_kube-system(6e084e2d-c888-44cb-a7cc-b0a905c1c524)\"" pod="kube-system/cilium-97rjw" podUID=6e084e2d-c888-44cb-a7cc-b0a905c1c524 Oct 2 20:11:30.570397 kubelet[1440]: E1002 20:11:30.570359 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:31.571507 kubelet[1440]: E1002 20:11:31.571440 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:32.572347 kubelet[1440]: E1002 20:11:32.572283 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:33.526930 kubelet[1440]: E1002 20:11:33.526896 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:11:33.573366 kubelet[1440]: E1002 20:11:33.573333 1440 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:34.573658 kubelet[1440]: E1002 20:11:34.573623 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:35.574474 kubelet[1440]: E1002 20:11:35.574438 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:36.575543 kubelet[1440]: E1002 20:11:36.575510 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:37.576509 kubelet[1440]: E1002 20:11:37.576472 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:38.528219 kubelet[1440]: E1002 20:11:38.528191 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:11:38.577521 kubelet[1440]: E1002 20:11:38.577488 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:39.577670 kubelet[1440]: E1002 20:11:39.577607 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:40.578407 kubelet[1440]: E1002 20:11:40.578361 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:41.579397 kubelet[1440]: E1002 20:11:41.579359 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:42.580153 kubelet[1440]: E1002 20:11:42.580104 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:43.097865 env[1140]: time="2023-10-02T20:11:43.097825026Z" level=info msg="StopPodSandbox for \"33fcf33019eb97207377e3758ca06b82d777f2313460c7ee51f4e57a9d0ee2cb\"" Oct 2 20:11:43.099457 env[1140]: time="2023-10-02T20:11:43.097891386Z" level=info msg="Container to stop \"e2e411525a20bca018b9fd949c736c7124b7dcba4cfe374d66d92c9dbea2b0f4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 20:11:43.099040 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-33fcf33019eb97207377e3758ca06b82d777f2313460c7ee51f4e57a9d0ee2cb-shm.mount: Deactivated successfully. Oct 2 20:11:43.106302 systemd[1]: cri-containerd-33fcf33019eb97207377e3758ca06b82d777f2313460c7ee51f4e57a9d0ee2cb.scope: Deactivated successfully. Oct 2 20:11:43.105000 audit: BPF prog-id=64 op=UNLOAD Oct 2 20:11:43.107721 kernel: kauditd_printk_skb: 165 callbacks suppressed Oct 2 20:11:43.107799 kernel: audit: type=1334 audit(1696277503.105:659): prog-id=64 op=UNLOAD Oct 2 20:11:43.112000 audit: BPF prog-id=71 op=UNLOAD Oct 2 20:11:43.115262 kernel: audit: type=1334 audit(1696277503.112:660): prog-id=71 op=UNLOAD Oct 2 20:11:43.125103 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33fcf33019eb97207377e3758ca06b82d777f2313460c7ee51f4e57a9d0ee2cb-rootfs.mount: Deactivated successfully. 
Oct 2 20:11:43.126163 env[1140]: time="2023-10-02T20:11:43.126106823Z" level=info msg="shim disconnected" id=33fcf33019eb97207377e3758ca06b82d777f2313460c7ee51f4e57a9d0ee2cb Oct 2 20:11:43.126163 env[1140]: time="2023-10-02T20:11:43.126158224Z" level=warning msg="cleaning up after shim disconnected" id=33fcf33019eb97207377e3758ca06b82d777f2313460c7ee51f4e57a9d0ee2cb namespace=k8s.io Oct 2 20:11:43.126163 env[1140]: time="2023-10-02T20:11:43.126167104Z" level=info msg="cleaning up dead shim" Oct 2 20:11:43.134857 env[1140]: time="2023-10-02T20:11:43.134804499Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:11:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1999 runtime=io.containerd.runc.v2\n" Oct 2 20:11:43.135165 env[1140]: time="2023-10-02T20:11:43.135128021Z" level=info msg="TearDown network for sandbox \"33fcf33019eb97207377e3758ca06b82d777f2313460c7ee51f4e57a9d0ee2cb\" successfully" Oct 2 20:11:43.135165 env[1140]: time="2023-10-02T20:11:43.135155861Z" level=info msg="StopPodSandbox for \"33fcf33019eb97207377e3758ca06b82d777f2313460c7ee51f4e57a9d0ee2cb\" returns successfully" Oct 2 20:11:43.202236 kubelet[1440]: I1002 20:11:43.202185 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-cni-path\") pod \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\" (UID: \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\") " Oct 2 20:11:43.202236 kubelet[1440]: I1002 20:11:43.202241 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-xtables-lock\") pod \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\" (UID: \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\") " Oct 2 20:11:43.202420 kubelet[1440]: I1002 20:11:43.202270 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6e084e2d-c888-44cb-a7cc-b0a905c1c524-hubble-tls\") pod \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\" (UID: \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\") " Oct 2 20:11:43.202420 kubelet[1440]: I1002 20:11:43.202288 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-hostproc\") pod \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\" (UID: \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\") " Oct 2 20:11:43.202420 kubelet[1440]: I1002 20:11:43.202308 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-etc-cni-netd\") pod \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\" (UID: \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\") " Oct 2 20:11:43.202420 kubelet[1440]: I1002 20:11:43.202328 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-host-proc-sys-kernel\") pod \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\" (UID: \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\") " Oct 2 20:11:43.202420 kubelet[1440]: I1002 20:11:43.202319 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-cni-path" (OuterVolumeSpecName: "cni-path") pod "6e084e2d-c888-44cb-a7cc-b0a905c1c524" (UID: "6e084e2d-c888-44cb-a7cc-b0a905c1c524"). 
InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:11:43.202420 kubelet[1440]: I1002 20:11:43.202349 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6e084e2d-c888-44cb-a7cc-b0a905c1c524-cilium-config-path\") pod \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\" (UID: \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\") " Oct 2 20:11:43.202566 kubelet[1440]: I1002 20:11:43.202370 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjtjx\" (UniqueName: \"kubernetes.io/projected/6e084e2d-c888-44cb-a7cc-b0a905c1c524-kube-api-access-sjtjx\") pod \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\" (UID: \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\") " Oct 2 20:11:43.202566 kubelet[1440]: I1002 20:11:43.202374 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-hostproc" (OuterVolumeSpecName: "hostproc") pod "6e084e2d-c888-44cb-a7cc-b0a905c1c524" (UID: "6e084e2d-c888-44cb-a7cc-b0a905c1c524"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:11:43.202566 kubelet[1440]: I1002 20:11:43.202388 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-lib-modules\") pod \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\" (UID: \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\") " Oct 2 20:11:43.202566 kubelet[1440]: I1002 20:11:43.202392 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6e084e2d-c888-44cb-a7cc-b0a905c1c524" (UID: "6e084e2d-c888-44cb-a7cc-b0a905c1c524"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:11:43.202566 kubelet[1440]: I1002 20:11:43.202407 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-cilium-run\") pod \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\" (UID: \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\") " Oct 2 20:11:43.202566 kubelet[1440]: I1002 20:11:43.202424 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-bpf-maps\") pod \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\" (UID: \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\") " Oct 2 20:11:43.202719 kubelet[1440]: I1002 20:11:43.202444 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-cilium-cgroup\") pod \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\" (UID: \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\") " Oct 2 20:11:43.202719 kubelet[1440]: I1002 20:11:43.202464 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6e084e2d-c888-44cb-a7cc-b0a905c1c524-clustermesh-secrets\") pod \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\" (UID: \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\") " Oct 2 20:11:43.202719 kubelet[1440]: I1002 20:11:43.202481 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-host-proc-sys-net\") pod \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\" (UID: \"6e084e2d-c888-44cb-a7cc-b0a905c1c524\") " Oct 2 20:11:43.202719 kubelet[1440]: I1002 20:11:43.202503 1440 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-hostproc\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:11:43.202719 kubelet[1440]: I1002 20:11:43.202516 1440 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-cni-path\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:11:43.202719 kubelet[1440]: I1002 20:11:43.202525 1440 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-xtables-lock\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:11:43.202719 kubelet[1440]: I1002 20:11:43.202553 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6e084e2d-c888-44cb-a7cc-b0a905c1c524" (UID: "6e084e2d-c888-44cb-a7cc-b0a905c1c524"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:11:43.202871 kubelet[1440]: I1002 20:11:43.202573 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6e084e2d-c888-44cb-a7cc-b0a905c1c524" (UID: "6e084e2d-c888-44cb-a7cc-b0a905c1c524"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:11:43.202871 kubelet[1440]: I1002 20:11:43.202589 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6e084e2d-c888-44cb-a7cc-b0a905c1c524" (UID: "6e084e2d-c888-44cb-a7cc-b0a905c1c524"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:11:43.202871 kubelet[1440]: W1002 20:11:43.202715 1440 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/6e084e2d-c888-44cb-a7cc-b0a905c1c524/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 20:11:43.202871 kubelet[1440]: I1002 20:11:43.202733 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6e084e2d-c888-44cb-a7cc-b0a905c1c524" (UID: "6e084e2d-c888-44cb-a7cc-b0a905c1c524"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:11:43.202871 kubelet[1440]: I1002 20:11:43.202763 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6e084e2d-c888-44cb-a7cc-b0a905c1c524" (UID: "6e084e2d-c888-44cb-a7cc-b0a905c1c524"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:11:43.202992 kubelet[1440]: I1002 20:11:43.202967 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6e084e2d-c888-44cb-a7cc-b0a905c1c524" (UID: "6e084e2d-c888-44cb-a7cc-b0a905c1c524"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:11:43.203157 kubelet[1440]: I1002 20:11:43.203123 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6e084e2d-c888-44cb-a7cc-b0a905c1c524" (UID: "6e084e2d-c888-44cb-a7cc-b0a905c1c524"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:11:43.204476 kubelet[1440]: I1002 20:11:43.204441 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e084e2d-c888-44cb-a7cc-b0a905c1c524-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6e084e2d-c888-44cb-a7cc-b0a905c1c524" (UID: "6e084e2d-c888-44cb-a7cc-b0a905c1c524"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 20:11:43.206259 systemd[1]: var-lib-kubelet-pods-6e084e2d\x2dc888\x2d44cb\x2da7cc\x2db0a905c1c524-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsjtjx.mount: Deactivated successfully. Oct 2 20:11:43.207551 kubelet[1440]: I1002 20:11:43.206869 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e084e2d-c888-44cb-a7cc-b0a905c1c524-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6e084e2d-c888-44cb-a7cc-b0a905c1c524" (UID: "6e084e2d-c888-44cb-a7cc-b0a905c1c524"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:11:43.207551 kubelet[1440]: I1002 20:11:43.207378 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e084e2d-c888-44cb-a7cc-b0a905c1c524-kube-api-access-sjtjx" (OuterVolumeSpecName: "kube-api-access-sjtjx") pod "6e084e2d-c888-44cb-a7cc-b0a905c1c524" (UID: "6e084e2d-c888-44cb-a7cc-b0a905c1c524"). InnerVolumeSpecName "kube-api-access-sjtjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:11:43.207551 kubelet[1440]: I1002 20:11:43.207499 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e084e2d-c888-44cb-a7cc-b0a905c1c524-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6e084e2d-c888-44cb-a7cc-b0a905c1c524" (UID: "6e084e2d-c888-44cb-a7cc-b0a905c1c524"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 20:11:43.206365 systemd[1]: var-lib-kubelet-pods-6e084e2d\x2dc888\x2d44cb\x2da7cc\x2db0a905c1c524-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 20:11:43.207754 systemd[1]: var-lib-kubelet-pods-6e084e2d\x2dc888\x2d44cb\x2da7cc\x2db0a905c1c524-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 20:11:43.303015 kubelet[1440]: I1002 20:11:43.302985 1440 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6e084e2d-c888-44cb-a7cc-b0a905c1c524-clustermesh-secrets\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:11:43.303161 kubelet[1440]: I1002 20:11:43.303150 1440 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-host-proc-sys-net\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:11:43.303267 kubelet[1440]: I1002 20:11:43.303257 1440 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-etc-cni-netd\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:11:43.303332 kubelet[1440]: I1002 20:11:43.303324 1440 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-host-proc-sys-kernel\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:11:43.303400 kubelet[1440]: I1002 20:11:43.303392 1440 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6e084e2d-c888-44cb-a7cc-b0a905c1c524-hubble-tls\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:11:43.303465 kubelet[1440]: I1002 20:11:43.303456 1440 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-sjtjx\" (UniqueName: \"kubernetes.io/projected/6e084e2d-c888-44cb-a7cc-b0a905c1c524-kube-api-access-sjtjx\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:11:43.303525 kubelet[1440]: I1002 20:11:43.303516 1440 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-lib-modules\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:11:43.303585 kubelet[1440]: I1002 20:11:43.303576 1440 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6e084e2d-c888-44cb-a7cc-b0a905c1c524-cilium-config-path\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:11:43.303640 
kubelet[1440]: I1002 20:11:43.303632 1440 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-cilium-cgroup\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:11:43.303708 kubelet[1440]: I1002 20:11:43.303700 1440 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-cilium-run\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:11:43.303764 kubelet[1440]: I1002 20:11:43.303757 1440 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6e084e2d-c888-44cb-a7cc-b0a905c1c524-bpf-maps\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:11:43.435802 kubelet[1440]: E1002 20:11:43.435696 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:43.528674 kubelet[1440]: E1002 20:11:43.528643 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:11:43.580940 kubelet[1440]: E1002 20:11:43.580902 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:43.624482 systemd[1]: Removed slice kubepods-burstable-pod6e084e2d_c888_44cb_a7cc_b0a905c1c524.slice. Oct 2 20:11:43.929038 kubelet[1440]: I1002 20:11:43.929004 1440 scope.go:115] "RemoveContainer" containerID="e2e411525a20bca018b9fd949c736c7124b7dcba4cfe374d66d92c9dbea2b0f4" Oct 2 20:11:43.931361 env[1140]: time="2023-10-02T20:11:43.931324282Z" level=info msg="RemoveContainer for \"e2e411525a20bca018b9fd949c736c7124b7dcba4cfe374d66d92c9dbea2b0f4\"" Oct 2 20:11:43.933315 env[1140]: time="2023-10-02T20:11:43.933287850Z" level=info msg="RemoveContainer for \"e2e411525a20bca018b9fd949c736c7124b7dcba4cfe374d66d92c9dbea2b0f4\" returns successfully" Oct 2 20:11:44.582024 kubelet[1440]: E1002 20:11:44.581964 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:45.583116 kubelet[1440]: E1002 20:11:45.583069 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:45.622039 kubelet[1440]: I1002 20:11:45.622013 1440 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=6e084e2d-c888-44cb-a7cc-b0a905c1c524 path="/var/lib/kubelet/pods/6e084e2d-c888-44cb-a7cc-b0a905c1c524/volumes" Oct 2 20:11:46.583864 kubelet[1440]: E1002 20:11:46.583823 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:46.695791 kubelet[1440]: I1002 20:11:46.695756 1440 topology_manager.go:210] "Topology Admit Handler" Oct 2 20:11:46.695791 kubelet[1440]: E1002 20:11:46.695796 1440 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6e084e2d-c888-44cb-a7cc-b0a905c1c524" containerName="mount-cgroup" Oct 2 20:11:46.695946 kubelet[1440]: E1002 20:11:46.695807 1440 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6e084e2d-c888-44cb-a7cc-b0a905c1c524" containerName="mount-cgroup" Oct 2 20:11:46.695946 kubelet[1440]: E1002 20:11:46.695814 1440 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6e084e2d-c888-44cb-a7cc-b0a905c1c524" containerName="mount-cgroup" Oct 2 20:11:46.695946 
kubelet[1440]: E1002 20:11:46.695820 1440 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6e084e2d-c888-44cb-a7cc-b0a905c1c524" containerName="mount-cgroup" Oct 2 20:11:46.695946 kubelet[1440]: I1002 20:11:46.695835 1440 memory_manager.go:346] "RemoveStaleState removing state" podUID="6e084e2d-c888-44cb-a7cc-b0a905c1c524" containerName="mount-cgroup" Oct 2 20:11:46.695946 kubelet[1440]: I1002 20:11:46.695842 1440 memory_manager.go:346] "RemoveStaleState removing state" podUID="6e084e2d-c888-44cb-a7cc-b0a905c1c524" containerName="mount-cgroup" Oct 2 20:11:46.695946 kubelet[1440]: I1002 20:11:46.695847 1440 memory_manager.go:346] "RemoveStaleState removing state" podUID="6e084e2d-c888-44cb-a7cc-b0a905c1c524" containerName="mount-cgroup" Oct 2 20:11:46.695946 kubelet[1440]: I1002 20:11:46.695853 1440 memory_manager.go:346] "RemoveStaleState removing state" podUID="6e084e2d-c888-44cb-a7cc-b0a905c1c524" containerName="mount-cgroup" Oct 2 20:11:46.700102 systemd[1]: Created slice kubepods-besteffort-poda25e6522_9d2b_4126_8442_b48a50f6cdd8.slice. Oct 2 20:11:46.701512 kubelet[1440]: I1002 20:11:46.700109 1440 topology_manager.go:210] "Topology Admit Handler" Oct 2 20:11:46.701512 kubelet[1440]: E1002 20:11:46.700156 1440 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6e084e2d-c888-44cb-a7cc-b0a905c1c524" containerName="mount-cgroup" Oct 2 20:11:46.701512 kubelet[1440]: I1002 20:11:46.700175 1440 memory_manager.go:346] "RemoveStaleState removing state" podUID="6e084e2d-c888-44cb-a7cc-b0a905c1c524" containerName="mount-cgroup" Oct 2 20:11:46.704986 systemd[1]: Created slice kubepods-burstable-pod2b9eff24_bfeb_48e7_b6c0_1d2c37b26f9c.slice. Oct 2 20:11:46.821051 kubelet[1440]: I1002 20:11:46.820893 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-etc-cni-netd\") pod \"cilium-hrkgh\" (UID: \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\") " pod="kube-system/cilium-hrkgh" Oct 2 20:11:46.821051 kubelet[1440]: I1002 20:11:46.820942 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-host-proc-sys-kernel\") pod \"cilium-hrkgh\" (UID: \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\") " pod="kube-system/cilium-hrkgh" Oct 2 20:11:46.821051 kubelet[1440]: I1002 20:11:46.820965 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-cni-path\") pod \"cilium-hrkgh\" (UID: \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\") " pod="kube-system/cilium-hrkgh" Oct 2 20:11:46.821051 kubelet[1440]: I1002 20:11:46.820984 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-cilium-ipsec-secrets\") pod \"cilium-hrkgh\" (UID: \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\") " pod="kube-system/cilium-hrkgh" Oct 2 20:11:46.821051 kubelet[1440]: I1002 20:11:46.821012 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-host-proc-sys-net\") pod \"cilium-hrkgh\" (UID: \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\") " 
pod="kube-system/cilium-hrkgh" Oct 2 20:11:46.821326 kubelet[1440]: I1002 20:11:46.821075 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-hubble-tls\") pod \"cilium-hrkgh\" (UID: \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\") " pod="kube-system/cilium-hrkgh" Oct 2 20:11:46.821326 kubelet[1440]: I1002 20:11:46.821131 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-cilium-run\") pod \"cilium-hrkgh\" (UID: \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\") " pod="kube-system/cilium-hrkgh" Oct 2 20:11:46.821326 kubelet[1440]: I1002 20:11:46.821156 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-bpf-maps\") pod \"cilium-hrkgh\" (UID: \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\") " pod="kube-system/cilium-hrkgh" Oct 2 20:11:46.821326 kubelet[1440]: I1002 20:11:46.821187 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct56h\" (UniqueName: \"kubernetes.io/projected/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-kube-api-access-ct56h\") pod \"cilium-hrkgh\" (UID: \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\") " pod="kube-system/cilium-hrkgh" Oct 2 20:11:46.821326 kubelet[1440]: I1002 20:11:46.821210 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-hostproc\") pod \"cilium-hrkgh\" (UID: \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\") " pod="kube-system/cilium-hrkgh" Oct 2 20:11:46.821326 kubelet[1440]: I1002 20:11:46.821246 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-clustermesh-secrets\") pod \"cilium-hrkgh\" (UID: \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\") " pod="kube-system/cilium-hrkgh" Oct 2 20:11:46.821461 kubelet[1440]: I1002 20:11:46.821280 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-cilium-config-path\") pod \"cilium-hrkgh\" (UID: \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\") " pod="kube-system/cilium-hrkgh" Oct 2 20:11:46.821461 kubelet[1440]: I1002 20:11:46.821300 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-xtables-lock\") pod \"cilium-hrkgh\" (UID: \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\") " pod="kube-system/cilium-hrkgh" Oct 2 20:11:46.821461 kubelet[1440]: I1002 20:11:46.821322 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a25e6522-9d2b-4126-8442-b48a50f6cdd8-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-2297t\" (UID: \"a25e6522-9d2b-4126-8442-b48a50f6cdd8\") " pod="kube-system/cilium-operator-f59cbd8c6-2297t" Oct 2 20:11:46.821461 kubelet[1440]: I1002 20:11:46.821343 1440 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw2kf\" (UniqueName: \"kubernetes.io/projected/a25e6522-9d2b-4126-8442-b48a50f6cdd8-kube-api-access-sw2kf\") pod \"cilium-operator-f59cbd8c6-2297t\" (UID: \"a25e6522-9d2b-4126-8442-b48a50f6cdd8\") " pod="kube-system/cilium-operator-f59cbd8c6-2297t" Oct 2 20:11:46.821461 kubelet[1440]: I1002 20:11:46.821363 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-cilium-cgroup\") pod \"cilium-hrkgh\" (UID: \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\") " pod="kube-system/cilium-hrkgh" Oct 2 20:11:46.821566 kubelet[1440]: I1002 20:11:46.821382 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-lib-modules\") pod \"cilium-hrkgh\" (UID: \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\") " pod="kube-system/cilium-hrkgh" Oct 2 20:11:47.003208 kubelet[1440]: E1002 20:11:47.003097 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:11:47.003948 env[1140]: time="2023-10-02T20:11:47.003908068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-2297t,Uid:a25e6522-9d2b-4126-8442-b48a50f6cdd8,Namespace:kube-system,Attempt:0,}" Oct 2 20:11:47.015751 env[1140]: time="2023-10-02T20:11:47.015497356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 20:11:47.015751 env[1140]: time="2023-10-02T20:11:47.015579436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 20:11:47.015751 env[1140]: time="2023-10-02T20:11:47.015613116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 20:11:47.015935 env[1140]: time="2023-10-02T20:11:47.015804917Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/824906496f42552e97f1b7d4dd0fa8c55188f49a0548aa02c347071c384237cc pid=2030 runtime=io.containerd.runc.v2 Oct 2 20:11:47.016572 kubelet[1440]: E1002 20:11:47.016540 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:11:47.016970 env[1140]: time="2023-10-02T20:11:47.016937122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hrkgh,Uid:2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c,Namespace:kube-system,Attempt:0,}" Oct 2 20:11:47.028688 systemd[1]: Started cri-containerd-824906496f42552e97f1b7d4dd0fa8c55188f49a0548aa02c347071c384237cc.scope. Oct 2 20:11:47.029387 env[1140]: time="2023-10-02T20:11:47.028176048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 20:11:47.029387 env[1140]: time="2023-10-02T20:11:47.028259808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 20:11:47.029387 env[1140]: time="2023-10-02T20:11:47.028271088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 20:11:47.029387 env[1140]: time="2023-10-02T20:11:47.028472729Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb65cdccbd88652757b824f8d4f9d914ea7307ed7b4ae2c493d55d48006c5ad5 pid=2058 runtime=io.containerd.runc.v2 Oct 2 20:11:47.049662 systemd[1]: Started cri-containerd-fb65cdccbd88652757b824f8d4f9d914ea7307ed7b4ae2c493d55d48006c5ad5.scope. Oct 2 20:11:47.059000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.059000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.065441 kernel: audit: type=1400 audit(1696277507.059:661): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.065552 kernel: audit: type=1400 audit(1696277507.059:662): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.065572 kernel: audit: type=1400 audit(1696277507.059:663): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.059000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.059000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.070051 kernel: audit: type=1400 audit(1696277507.059:664): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.059000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.072525 kernel: audit: type=1400 audit(1696277507.059:665): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.059000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.075521 kernel: audit: type=1400 audit(1696277507.059:666): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.075571 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 20:11:47.075600 kernel: audit: type=1400 audit(1696277507.059:667): avc: denied { 
perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.059000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.059000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.059000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.059000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.059000 audit: BPF prog-id=75 op=LOAD Oct 2 20:11:47.059000 audit[2041]: AVC avc: denied { bpf } for pid=2041 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.059000 audit[2041]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000147b38 a2=10 a3=0 items=0 ppid=2030 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:11:47.059000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832343930363439366634323535326539376631623764346464306661 Oct 2 20:11:47.059000 audit[2041]: AVC avc: denied { perfmon } for pid=2041 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.059000 audit[2041]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001475a0 a2=3c a3=0 items=0 ppid=2030 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:11:47.059000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832343930363439366634323535326539376631623764346464306661 Oct 2 20:11:47.059000 audit[2041]: AVC avc: denied { bpf } for pid=2041 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.059000 audit[2041]: AVC avc: denied { bpf } for pid=2041 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.059000 audit[2041]: AVC avc: denied { bpf } for pid=2041 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.059000 audit[2041]: AVC avc: denied { perfmon } for pid=2041 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.059000 audit[2041]: AVC avc: denied { perfmon } for pid=2041 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.059000 audit[2041]: AVC avc: denied { perfmon } for pid=2041 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.059000 audit[2041]: AVC avc: denied { perfmon } for pid=2041 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.059000 audit[2041]: AVC avc: denied { perfmon } for pid=2041 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.059000 audit[2041]: AVC avc: denied { bpf } for pid=2041 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.059000 audit[2041]: AVC avc: denied { bpf } for pid=2041 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.059000 audit: BPF prog-id=76 op=LOAD Oct 2 20:11:47.059000 audit[2041]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001478e0 a2=78 a3=0 items=0 ppid=2030 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:11:47.059000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832343930363439366634323535326539376631623764346464306661 Oct 2 20:11:47.061000 audit[2041]: AVC avc: denied { bpf } for pid=2041 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.061000 audit[2041]: AVC avc: denied { bpf } for pid=2041 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.061000 audit[2041]: AVC avc: denied { perfmon } for pid=2041 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.061000 audit[2041]: AVC avc: denied { perfmon } for pid=2041 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.061000 audit[2041]: AVC avc: denied { perfmon } for pid=2041 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.061000 audit[2041]: AVC avc: denied { perfmon } for pid=2041 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.061000 audit[2041]: AVC avc: denied { perfmon } for pid=2041 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.061000 audit[2041]: AVC avc: 
denied { bpf } for pid=2041 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.061000 audit[2041]: AVC avc: denied { bpf } for pid=2041 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.061000 audit: BPF prog-id=77 op=LOAD Oct 2 20:11:47.061000 audit[2041]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000147670 a2=78 a3=0 items=0 ppid=2030 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:11:47.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832343930363439366634323535326539376631623764346464306661 Oct 2 20:11:47.064000 audit: BPF prog-id=77 op=UNLOAD Oct 2 20:11:47.064000 audit: BPF prog-id=76 op=UNLOAD Oct 2 20:11:47.064000 audit[2041]: AVC avc: denied { bpf } for pid=2041 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.064000 audit[2041]: AVC avc: denied { bpf } for pid=2041 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.064000 audit[2041]: AVC avc: denied { bpf } for pid=2041 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.064000 audit[2041]: AVC avc: denied { perfmon } for pid=2041 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.064000 audit[2041]: AVC avc: denied { perfmon } for pid=2041 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.064000 audit[2041]: AVC avc: denied { perfmon } for pid=2041 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.064000 audit[2041]: AVC avc: denied { perfmon } for pid=2041 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.064000 audit[2041]: AVC avc: denied { perfmon } for pid=2041 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.064000 audit[2041]: AVC avc: denied { bpf } for pid=2041 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.064000 audit[2041]: AVC avc: denied { bpf } for pid=2041 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.064000 audit: BPF prog-id=78 op=LOAD Oct 2 20:11:47.064000 audit[2041]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000147b40 a2=78 a3=0 items=0 ppid=2030 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:11:47.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832343930363439366634323535326539376631623764346464306661 Oct 2 20:11:47.071000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.071000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.071000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.071000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.071000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.071000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.071000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.071000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.071000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.075000 audit: BPF prog-id=79 op=LOAD Oct 2 20:11:47.077000 audit[2068]: AVC avc: denied { bpf } for pid=2068 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.077000 audit: BPF prog-id=81 op=LOAD Oct 2 20:11:47.077000 audit[2068]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=40001c7670 a2=78 a3=0 items=0 ppid=2058 pid=2068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:11:47.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662363563646363626438383635323735376238323466386434663964 Oct 2 20:11:47.077000 audit: BPF prog-id=81 op=UNLOAD Oct 2 20:11:47.077000 audit: BPF prog-id=80 op=UNLOAD Oct 2 20:11:47.077000 audit[2068]: AVC avc: denied { bpf } for pid=2068 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 20:11:47.077000 audit[2068]: AVC avc: denied { bpf } for pid=2068 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.077000 audit[2068]: AVC avc: denied { bpf } for pid=2068 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.077000 audit[2068]: AVC avc: denied { perfmon } for pid=2068 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.077000 audit[2068]: AVC avc: denied { perfmon } for pid=2068 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.077000 audit[2068]: AVC avc: denied { perfmon } for pid=2068 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.077000 audit[2068]: AVC avc: denied { perfmon } for pid=2068 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.077000 audit[2068]: AVC avc: denied { perfmon } for pid=2068 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.077000 audit[2068]: AVC avc: denied { bpf } for pid=2068 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.077000 audit[2068]: AVC avc: denied { bpf } for pid=2068 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:47.077000 audit: BPF prog-id=82 op=LOAD Oct 2 20:11:47.077000 audit[2068]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001c7b40 a2=78 a3=0 items=0 ppid=2058 pid=2068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:11:47.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662363563646363626438383635323735376238323466386434663964 Oct 2 20:11:47.091708 env[1140]: time="2023-10-02T20:11:47.091657348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hrkgh,Uid:2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb65cdccbd88652757b824f8d4f9d914ea7307ed7b4ae2c493d55d48006c5ad5\"" Oct 2 20:11:47.092618 kubelet[1440]: E1002 20:11:47.092179 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:11:47.093953 env[1140]: time="2023-10-02T20:11:47.093918957Z" level=info msg="CreateContainer within sandbox \"fb65cdccbd88652757b824f8d4f9d914ea7307ed7b4ae2c493d55d48006c5ad5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 20:11:47.094417 env[1140]: time="2023-10-02T20:11:47.094389479Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-2297t,Uid:a25e6522-9d2b-4126-8442-b48a50f6cdd8,Namespace:kube-system,Attempt:0,} returns sandbox id \"824906496f42552e97f1b7d4dd0fa8c55188f49a0548aa02c347071c384237cc\"" Oct 2 20:11:47.095009 kubelet[1440]: E1002 20:11:47.094886 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:11:47.095604 env[1140]: time="2023-10-02T20:11:47.095567804Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 2 20:11:47.104103 env[1140]: time="2023-10-02T20:11:47.104057239Z" level=info msg="CreateContainer within sandbox \"fb65cdccbd88652757b824f8d4f9d914ea7307ed7b4ae2c493d55d48006c5ad5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"235d34e987267d4afddb86a286f562f26990e1b7620f242cae12a7e4364b8a6e\"" Oct 2 20:11:47.104556 env[1140]: time="2023-10-02T20:11:47.104532601Z" level=info msg="StartContainer for \"235d34e987267d4afddb86a286f562f26990e1b7620f242cae12a7e4364b8a6e\"" Oct 2 20:11:47.119042 systemd[1]: Started cri-containerd-235d34e987267d4afddb86a286f562f26990e1b7620f242cae12a7e4364b8a6e.scope. Oct 2 20:11:47.139004 systemd[1]: cri-containerd-235d34e987267d4afddb86a286f562f26990e1b7620f242cae12a7e4364b8a6e.scope: Deactivated successfully. Oct 2 20:11:47.150852 env[1140]: time="2023-10-02T20:11:47.150803910Z" level=info msg="shim disconnected" id=235d34e987267d4afddb86a286f562f26990e1b7620f242cae12a7e4364b8a6e Oct 2 20:11:47.150852 env[1140]: time="2023-10-02T20:11:47.150853350Z" level=warning msg="cleaning up after shim disconnected" id=235d34e987267d4afddb86a286f562f26990e1b7620f242cae12a7e4364b8a6e namespace=k8s.io Oct 2 20:11:47.151062 env[1140]: time="2023-10-02T20:11:47.150862910Z" level=info msg="cleaning up dead shim" Oct 2 20:11:47.159469 env[1140]: time="2023-10-02T20:11:47.159409465Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:11:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2129 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:11:47Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/235d34e987267d4afddb86a286f562f26990e1b7620f242cae12a7e4364b8a6e/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:11:47.159738 env[1140]: time="2023-10-02T20:11:47.159675147Z" level=error msg="copy shim log" error="read /proc/self/fd/36: file already closed" Oct 2 20:11:47.159953 env[1140]: time="2023-10-02T20:11:47.159913428Z" level=error msg="Failed to pipe stderr of container \"235d34e987267d4afddb86a286f562f26990e1b7620f242cae12a7e4364b8a6e\"" error="reading from a closed fifo" Oct 2 20:11:47.160297 env[1140]: time="2023-10-02T20:11:47.160256869Z" level=error msg="Failed to pipe stdout of container \"235d34e987267d4afddb86a286f562f26990e1b7620f242cae12a7e4364b8a6e\"" error="reading from a closed fifo" Oct 2 20:11:47.162116 env[1140]: time="2023-10-02T20:11:47.162071076Z" level=error msg="StartContainer for \"235d34e987267d4afddb86a286f562f26990e1b7620f242cae12a7e4364b8a6e\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:11:47.162463 kubelet[1440]: E1002 
20:11:47.162436 1440 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="235d34e987267d4afddb86a286f562f26990e1b7620f242cae12a7e4364b8a6e" Oct 2 20:11:47.162569 kubelet[1440]: E1002 20:11:47.162553 1440 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:11:47.162569 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:11:47.162569 kubelet[1440]: rm /hostbin/cilium-mount Oct 2 20:11:47.162569 kubelet[1440]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ct56h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-hrkgh_kube-system(2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:11:47.162728 kubelet[1440]: E1002 20:11:47.162590 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-hrkgh" podUID=2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c Oct 2 20:11:47.584539 kubelet[1440]: E1002 20:11:47.584506 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:47.937082 kubelet[1440]: E1002 20:11:47.936997 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Oct 2 20:11:47.938869 env[1140]: time="2023-10-02T20:11:47.938834980Z" level=info msg="CreateContainer within sandbox \"fb65cdccbd88652757b824f8d4f9d914ea7307ed7b4ae2c493d55d48006c5ad5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 20:11:47.953684 env[1140]: time="2023-10-02T20:11:47.953634080Z" level=info msg="CreateContainer within sandbox \"fb65cdccbd88652757b824f8d4f9d914ea7307ed7b4ae2c493d55d48006c5ad5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"1b4fb4a7a111ab714d4312642a685e0482acfb82a6cb2fe77a12da1b3e43992e\"" Oct 2 20:11:47.954305 env[1140]: time="2023-10-02T20:11:47.954268523Z" level=info msg="StartContainer for \"1b4fb4a7a111ab714d4312642a685e0482acfb82a6cb2fe77a12da1b3e43992e\"" Oct 2 20:11:47.978169 systemd[1]: Started cri-containerd-1b4fb4a7a111ab714d4312642a685e0482acfb82a6cb2fe77a12da1b3e43992e.scope. Oct 2 20:11:47.996621 systemd[1]: cri-containerd-1b4fb4a7a111ab714d4312642a685e0482acfb82a6cb2fe77a12da1b3e43992e.scope: Deactivated successfully. Oct 2 20:11:48.033073 env[1140]: time="2023-10-02T20:11:48.032022721Z" level=info msg="shim disconnected" id=1b4fb4a7a111ab714d4312642a685e0482acfb82a6cb2fe77a12da1b3e43992e Oct 2 20:11:48.033491 env[1140]: time="2023-10-02T20:11:48.033460927Z" level=warning msg="cleaning up after shim disconnected" id=1b4fb4a7a111ab714d4312642a685e0482acfb82a6cb2fe77a12da1b3e43992e namespace=k8s.io Oct 2 20:11:48.033560 env[1140]: time="2023-10-02T20:11:48.033547367Z" level=info msg="cleaning up dead shim" Oct 2 20:11:48.044555 env[1140]: time="2023-10-02T20:11:48.044515012Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:11:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2167 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:11:48Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1b4fb4a7a111ab714d4312642a685e0482acfb82a6cb2fe77a12da1b3e43992e/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:11:48.044941 env[1140]: time="2023-10-02T20:11:48.044886854Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 20:11:48.045153 env[1140]: time="2023-10-02T20:11:48.045111335Z" level=error msg="Failed to pipe stdout of container \"1b4fb4a7a111ab714d4312642a685e0482acfb82a6cb2fe77a12da1b3e43992e\"" error="reading from a closed fifo" Oct 2 20:11:48.045234 env[1140]: time="2023-10-02T20:11:48.045130255Z" level=error msg="Failed to pipe stderr of container \"1b4fb4a7a111ab714d4312642a685e0482acfb82a6cb2fe77a12da1b3e43992e\"" error="reading from a closed fifo" Oct 2 20:11:48.046370 env[1140]: time="2023-10-02T20:11:48.046322300Z" level=error msg="StartContainer for \"1b4fb4a7a111ab714d4312642a685e0482acfb82a6cb2fe77a12da1b3e43992e\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:11:48.046752 kubelet[1440]: E1002 20:11:48.046582 1440 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" 
containerID="1b4fb4a7a111ab714d4312642a685e0482acfb82a6cb2fe77a12da1b3e43992e" Oct 2 20:11:48.046752 kubelet[1440]: E1002 20:11:48.046693 1440 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:11:48.046752 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:11:48.046752 kubelet[1440]: rm /hostbin/cilium-mount Oct 2 20:11:48.046919 kubelet[1440]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ct56h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-hrkgh_kube-system(2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:11:48.046975 kubelet[1440]: E1002 20:11:48.046726 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-hrkgh" podUID=2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c Oct 2 20:11:48.446575 env[1140]: time="2023-10-02T20:11:48.446520495Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:11:48.448363 env[1140]: time="2023-10-02T20:11:48.448331302Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:11:48.449672 env[1140]: time="2023-10-02T20:11:48.449634988Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:11:48.450260 env[1140]: time="2023-10-02T20:11:48.450217070Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Oct 2 20:11:48.451989 env[1140]: time="2023-10-02T20:11:48.451961237Z" level=info msg="CreateContainer within sandbox \"824906496f42552e97f1b7d4dd0fa8c55188f49a0548aa02c347071c384237cc\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 20:11:48.461517 env[1140]: time="2023-10-02T20:11:48.461479036Z" level=info msg="CreateContainer within sandbox \"824906496f42552e97f1b7d4dd0fa8c55188f49a0548aa02c347071c384237cc\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"399ea6fab9691a861d417e78dd5a3f94e282ef9ad19f40abe206fbe7132c1553\"" Oct 2 20:11:48.461934 env[1140]: time="2023-10-02T20:11:48.461899318Z" level=info msg="StartContainer for \"399ea6fab9691a861d417e78dd5a3f94e282ef9ad19f40abe206fbe7132c1553\"" Oct 2 20:11:48.475970 systemd[1]: Started cri-containerd-399ea6fab9691a861d417e78dd5a3f94e282ef9ad19f40abe206fbe7132c1553.scope. Oct 2 20:11:48.495000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.497714 kernel: kauditd_printk_skb: 144 callbacks suppressed Oct 2 20:11:48.497775 kernel: audit: type=1400 audit(1696277508.495:693): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.495000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.502271 kernel: audit: type=1400 audit(1696277508.495:694): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.502341 kernel: audit: type=1400 audit(1696277508.495:695): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.495000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.495000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.506898 kernel: audit: type=1400 audit(1696277508.495:696): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.506940 kernel: audit: type=1400 audit(1696277508.495:697): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
20:11:48.495000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.495000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.511731 kernel: audit: type=1400 audit(1696277508.495:698): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.511837 kernel: audit: type=1400 audit(1696277508.495:699): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.495000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.495000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.516384 kernel: audit: type=1400 audit(1696277508.495:700): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.516443 kernel: audit: type=1400 audit(1696277508.495:701): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.495000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.496000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.520877 kernel: audit: type=1400 audit(1696277508.496:702): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.496000 audit: BPF prog-id=83 op=LOAD Oct 2 20:11:48.499000 audit[2187]: AVC avc: denied { bpf } for pid=2187 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.499000 audit[2187]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=40001c5b38 a2=10 a3=0 items=0 ppid=2030 pid=2187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:11:48.499000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339396561366661623936393161383631643431376537386464356133 Oct 2 20:11:48.499000 audit[2187]: AVC avc: denied { perfmon } for pid=2187 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 20:11:48.499000 audit[2187]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001c55a0 a2=3c a3=0 items=0 ppid=2030 pid=2187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:11:48.499000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339396561366661623936393161383631643431376537386464356133 Oct 2 20:11:48.499000 audit[2187]: AVC avc: denied { bpf } for pid=2187 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.499000 audit[2187]: AVC avc: denied { bpf } for pid=2187 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.499000 audit[2187]: AVC avc: denied { bpf } for pid=2187 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.499000 audit[2187]: AVC avc: denied { perfmon } for pid=2187 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.499000 audit[2187]: AVC avc: denied { perfmon } for pid=2187 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.499000 audit[2187]: AVC avc: denied { perfmon } for pid=2187 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.499000 audit[2187]: AVC avc: denied { perfmon } for pid=2187 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.499000 audit[2187]: AVC avc: denied { perfmon } for pid=2187 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.499000 audit[2187]: AVC avc: denied { bpf } for pid=2187 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.499000 audit[2187]: AVC avc: denied { bpf } for pid=2187 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.499000 audit: BPF prog-id=84 op=LOAD Oct 2 20:11:48.499000 audit[2187]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001c58e0 a2=78 a3=0 items=0 ppid=2030 pid=2187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:11:48.499000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339396561366661623936393161383631643431376537386464356133 Oct 2 20:11:48.501000 audit[2187]: AVC avc: denied { bpf } for pid=2187 
comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.501000 audit[2187]: AVC avc: denied { bpf } for pid=2187 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.501000 audit[2187]: AVC avc: denied { perfmon } for pid=2187 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.501000 audit[2187]: AVC avc: denied { perfmon } for pid=2187 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.501000 audit[2187]: AVC avc: denied { perfmon } for pid=2187 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.501000 audit[2187]: AVC avc: denied { perfmon } for pid=2187 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.501000 audit[2187]: AVC avc: denied { perfmon } for pid=2187 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.501000 audit[2187]: AVC avc: denied { bpf } for pid=2187 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.501000 audit[2187]: AVC avc: denied { bpf } for pid=2187 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.501000 audit: BPF prog-id=85 op=LOAD Oct 2 20:11:48.501000 audit[2187]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001c5670 a2=78 a3=0 items=0 ppid=2030 pid=2187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:11:48.501000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339396561366661623936393161383631643431376537386464356133 Oct 2 20:11:48.503000 audit: BPF prog-id=85 op=UNLOAD Oct 2 20:11:48.503000 audit: BPF prog-id=84 op=UNLOAD Oct 2 20:11:48.503000 audit[2187]: AVC avc: denied { bpf } for pid=2187 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.503000 audit[2187]: AVC avc: denied { bpf } for pid=2187 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.503000 audit[2187]: AVC avc: denied { bpf } for pid=2187 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.503000 audit[2187]: AVC avc: denied { perfmon } for pid=2187 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.503000 audit[2187]: AVC avc: denied { perfmon } for pid=2187 comm="runc" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.503000 audit[2187]: AVC avc: denied { perfmon } for pid=2187 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.503000 audit[2187]: AVC avc: denied { perfmon } for pid=2187 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.503000 audit[2187]: AVC avc: denied { perfmon } for pid=2187 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.503000 audit[2187]: AVC avc: denied { bpf } for pid=2187 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.503000 audit[2187]: AVC avc: denied { bpf } for pid=2187 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:11:48.503000 audit: BPF prog-id=86 op=LOAD Oct 2 20:11:48.503000 audit[2187]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001c5b40 a2=78 a3=0 items=0 ppid=2030 pid=2187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:11:48.503000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339396561366661623936393161383631643431376537386464356133 Oct 2 20:11:48.529542 kubelet[1440]: E1002 20:11:48.529506 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:11:48.539027 env[1140]: time="2023-10-02T20:11:48.538987313Z" level=info msg="StartContainer for \"399ea6fab9691a861d417e78dd5a3f94e282ef9ad19f40abe206fbe7132c1553\" returns successfully" Oct 2 20:11:48.585043 kubelet[1440]: E1002 20:11:48.585002 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:48.586000 audit[2198]: AVC avc: denied { map_create } for pid=2198 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c540,c722 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c540,c722 tclass=bpf permissive=0 Oct 2 20:11:48.586000 audit[2198]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-13 a0=0 a1=400019b768 a2=48 a3=0 items=0 ppid=2030 pid=2198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c540,c722 key=(null) Oct 2 20:11:48.586000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 20:11:48.927340 systemd[1]: run-containerd-runc-k8s.io-1b4fb4a7a111ab714d4312642a685e0482acfb82a6cb2fe77a12da1b3e43992e-runc.g13sr5.mount: Deactivated successfully. 
Oct 2 20:11:48.927429 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b4fb4a7a111ab714d4312642a685e0482acfb82a6cb2fe77a12da1b3e43992e-rootfs.mount: Deactivated successfully. Oct 2 20:11:48.941103 kubelet[1440]: I1002 20:11:48.941069 1440 scope.go:115] "RemoveContainer" containerID="235d34e987267d4afddb86a286f562f26990e1b7620f242cae12a7e4364b8a6e" Oct 2 20:11:48.941783 kubelet[1440]: I1002 20:11:48.941752 1440 scope.go:115] "RemoveContainer" containerID="235d34e987267d4afddb86a286f562f26990e1b7620f242cae12a7e4364b8a6e" Oct 2 20:11:48.941861 env[1140]: time="2023-10-02T20:11:48.941796799Z" level=info msg="RemoveContainer for \"235d34e987267d4afddb86a286f562f26990e1b7620f242cae12a7e4364b8a6e\"" Oct 2 20:11:48.942881 kubelet[1440]: E1002 20:11:48.942842 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:11:48.944213 env[1140]: time="2023-10-02T20:11:48.944173289Z" level=info msg="RemoveContainer for \"235d34e987267d4afddb86a286f562f26990e1b7620f242cae12a7e4364b8a6e\"" Oct 2 20:11:48.944746 env[1140]: time="2023-10-02T20:11:48.944717891Z" level=info msg="RemoveContainer for \"235d34e987267d4afddb86a286f562f26990e1b7620f242cae12a7e4364b8a6e\" returns successfully" Oct 2 20:11:48.944839 env[1140]: time="2023-10-02T20:11:48.944782091Z" level=info msg="RemoveContainer for \"235d34e987267d4afddb86a286f562f26990e1b7620f242cae12a7e4364b8a6e\" returns successfully" Oct 2 20:11:48.944946 kubelet[1440]: E1002 20:11:48.944930 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:11:48.945151 kubelet[1440]: E1002 20:11:48.945139 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-hrkgh_kube-system(2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c)\"" pod="kube-system/cilium-hrkgh" podUID=2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c Oct 2 20:11:48.964302 kubelet[1440]: I1002 20:11:48.964274 1440 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-2297t" podStartSLOduration=-9.22337203389054e+09 pod.CreationTimestamp="2023-10-02 20:11:46 +0000 UTC" firstStartedPulling="2023-10-02 20:11:47.095212242 +0000 UTC m=+205.301643908" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 20:11:48.963605808 +0000 UTC m=+207.170037554" watchObservedRunningTime="2023-10-02 20:11:48.964237531 +0000 UTC m=+207.170669197" Oct 2 20:11:49.585617 kubelet[1440]: E1002 20:11:49.585584 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:49.945610 kubelet[1440]: E1002 20:11:49.945297 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:11:49.945891 kubelet[1440]: E1002 20:11:49.945620 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-hrkgh_kube-system(2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c)\"" pod="kube-system/cilium-hrkgh" podUID=2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c Oct 2 
20:11:49.945975 kubelet[1440]: E1002 20:11:49.945957 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:11:50.256362 kubelet[1440]: W1002 20:11:50.255973 1440 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b9eff24_bfeb_48e7_b6c0_1d2c37b26f9c.slice/cri-containerd-235d34e987267d4afddb86a286f562f26990e1b7620f242cae12a7e4364b8a6e.scope WatchSource:0}: container "235d34e987267d4afddb86a286f562f26990e1b7620f242cae12a7e4364b8a6e" in namespace "k8s.io": not found Oct 2 20:11:50.586264 kubelet[1440]: E1002 20:11:50.586219 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:51.587014 kubelet[1440]: E1002 20:11:51.586972 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:52.588036 kubelet[1440]: E1002 20:11:52.587952 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:53.362028 kubelet[1440]: W1002 20:11:53.361970 1440 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b9eff24_bfeb_48e7_b6c0_1d2c37b26f9c.slice/cri-containerd-1b4fb4a7a111ab714d4312642a685e0482acfb82a6cb2fe77a12da1b3e43992e.scope WatchSource:0}: task 1b4fb4a7a111ab714d4312642a685e0482acfb82a6cb2fe77a12da1b3e43992e not found: not found Oct 2 20:11:53.530903 kubelet[1440]: E1002 20:11:53.530874 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:11:53.588693 kubelet[1440]: E1002 20:11:53.588641 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:54.589210 kubelet[1440]: E1002 20:11:54.589136 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:55.589951 kubelet[1440]: E1002 20:11:55.589864 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:56.590744 kubelet[1440]: E1002 20:11:56.590686 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:57.591327 kubelet[1440]: E1002 20:11:57.591282 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:58.532352 kubelet[1440]: E1002 20:11:58.532304 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:11:58.591990 kubelet[1440]: E1002 20:11:58.591939 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:11:59.592840 kubelet[1440]: E1002 20:11:59.592797 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:00.593045 kubelet[1440]: E1002 20:12:00.592987 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 20:12:01.593611 kubelet[1440]: E1002 20:12:01.593558 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:02.593894 kubelet[1440]: E1002 20:12:02.593832 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:03.436056 kubelet[1440]: E1002 20:12:03.436014 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:03.533346 kubelet[1440]: E1002 20:12:03.533310 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:12:03.594767 kubelet[1440]: E1002 20:12:03.594714 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:04.595189 kubelet[1440]: E1002 20:12:04.595120 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:04.620738 kubelet[1440]: E1002 20:12:04.620689 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:12:04.622627 env[1140]: time="2023-10-02T20:12:04.622591904Z" level=info msg="CreateContainer within sandbox \"fb65cdccbd88652757b824f8d4f9d914ea7307ed7b4ae2c493d55d48006c5ad5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 20:12:04.634676 env[1140]: time="2023-10-02T20:12:04.634627634Z" level=info msg="CreateContainer within sandbox \"fb65cdccbd88652757b824f8d4f9d914ea7307ed7b4ae2c493d55d48006c5ad5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"0be3206c228d5621c120ec39e7bc7189f1bf60d5ff49f5f253190d943a3b49ba\"" Oct 2 20:12:04.635976 env[1140]: time="2023-10-02T20:12:04.635943359Z" level=info msg="StartContainer for \"0be3206c228d5621c120ec39e7bc7189f1bf60d5ff49f5f253190d943a3b49ba\"" Oct 2 20:12:04.655108 systemd[1]: run-containerd-runc-k8s.io-0be3206c228d5621c120ec39e7bc7189f1bf60d5ff49f5f253190d943a3b49ba-runc.4cHU02.mount: Deactivated successfully. Oct 2 20:12:04.657248 systemd[1]: Started cri-containerd-0be3206c228d5621c120ec39e7bc7189f1bf60d5ff49f5f253190d943a3b49ba.scope. Oct 2 20:12:04.673374 systemd[1]: cri-containerd-0be3206c228d5621c120ec39e7bc7189f1bf60d5ff49f5f253190d943a3b49ba.scope: Deactivated successfully. 
Oct 2 20:12:04.771304 env[1140]: time="2023-10-02T20:12:04.771240760Z" level=info msg="shim disconnected" id=0be3206c228d5621c120ec39e7bc7189f1bf60d5ff49f5f253190d943a3b49ba Oct 2 20:12:04.771304 env[1140]: time="2023-10-02T20:12:04.771292400Z" level=warning msg="cleaning up after shim disconnected" id=0be3206c228d5621c120ec39e7bc7189f1bf60d5ff49f5f253190d943a3b49ba namespace=k8s.io Oct 2 20:12:04.771304 env[1140]: time="2023-10-02T20:12:04.771303120Z" level=info msg="cleaning up dead shim" Oct 2 20:12:04.779962 env[1140]: time="2023-10-02T20:12:04.779907916Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:12:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2246 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:12:04Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/0be3206c228d5621c120ec39e7bc7189f1bf60d5ff49f5f253190d943a3b49ba/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:12:04.780213 env[1140]: time="2023-10-02T20:12:04.780151797Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 20:12:04.780414 env[1140]: time="2023-10-02T20:12:04.780365718Z" level=error msg="Failed to pipe stdout of container \"0be3206c228d5621c120ec39e7bc7189f1bf60d5ff49f5f253190d943a3b49ba\"" error="reading from a closed fifo" Oct 2 20:12:04.780555 env[1140]: time="2023-10-02T20:12:04.780483758Z" level=error msg="Failed to pipe stderr of container \"0be3206c228d5621c120ec39e7bc7189f1bf60d5ff49f5f253190d943a3b49ba\"" error="reading from a closed fifo" Oct 2 20:12:04.782234 env[1140]: time="2023-10-02T20:12:04.782171685Z" level=error msg="StartContainer for \"0be3206c228d5621c120ec39e7bc7189f1bf60d5ff49f5f253190d943a3b49ba\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:12:04.782430 kubelet[1440]: E1002 20:12:04.782396 1440 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="0be3206c228d5621c120ec39e7bc7189f1bf60d5ff49f5f253190d943a3b49ba" Oct 2 20:12:04.782531 kubelet[1440]: E1002 20:12:04.782518 1440 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:12:04.782531 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:12:04.782531 kubelet[1440]: rm /hostbin/cilium-mount Oct 2 20:12:04.782531 kubelet[1440]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ct56h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-hrkgh_kube-system(2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:12:04.782662 kubelet[1440]: E1002 20:12:04.782555 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-hrkgh" podUID=2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c Oct 2 20:12:04.971739 kubelet[1440]: I1002 20:12:04.970152 1440 scope.go:115] "RemoveContainer" containerID="1b4fb4a7a111ab714d4312642a685e0482acfb82a6cb2fe77a12da1b3e43992e" Oct 2 20:12:04.971739 kubelet[1440]: I1002 20:12:04.970424 1440 scope.go:115] "RemoveContainer" containerID="1b4fb4a7a111ab714d4312642a685e0482acfb82a6cb2fe77a12da1b3e43992e" Oct 2 20:12:04.971884 env[1140]: time="2023-10-02T20:12:04.971241629Z" level=info msg="RemoveContainer for \"1b4fb4a7a111ab714d4312642a685e0482acfb82a6cb2fe77a12da1b3e43992e\"" Oct 2 20:12:04.971884 env[1140]: time="2023-10-02T20:12:04.971658551Z" level=info msg="RemoveContainer for \"1b4fb4a7a111ab714d4312642a685e0482acfb82a6cb2fe77a12da1b3e43992e\"" Oct 2 20:12:04.971884 env[1140]: time="2023-10-02T20:12:04.971837191Z" level=error msg="RemoveContainer for \"1b4fb4a7a111ab714d4312642a685e0482acfb82a6cb2fe77a12da1b3e43992e\" failed" error="failed to set removing state for container \"1b4fb4a7a111ab714d4312642a685e0482acfb82a6cb2fe77a12da1b3e43992e\": container is already in removing state" Oct 2 20:12:04.972000 kubelet[1440]: E1002 20:12:04.971959 1440 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"1b4fb4a7a111ab714d4312642a685e0482acfb82a6cb2fe77a12da1b3e43992e\": container is already in removing state" 
containerID="1b4fb4a7a111ab714d4312642a685e0482acfb82a6cb2fe77a12da1b3e43992e" Oct 2 20:12:04.972000 kubelet[1440]: E1002 20:12:04.971985 1440 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "1b4fb4a7a111ab714d4312642a685e0482acfb82a6cb2fe77a12da1b3e43992e": container is already in removing state; Skipping pod "cilium-hrkgh_kube-system(2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c)" Oct 2 20:12:04.972056 kubelet[1440]: E1002 20:12:04.972040 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:12:04.972781 kubelet[1440]: E1002 20:12:04.972267 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-hrkgh_kube-system(2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c)\"" pod="kube-system/cilium-hrkgh" podUID=2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c Oct 2 20:12:04.975265 env[1140]: time="2023-10-02T20:12:04.974353042Z" level=info msg="RemoveContainer for \"1b4fb4a7a111ab714d4312642a685e0482acfb82a6cb2fe77a12da1b3e43992e\" returns successfully" Oct 2 20:12:05.596098 kubelet[1440]: E1002 20:12:05.596049 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:05.631961 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0be3206c228d5621c120ec39e7bc7189f1bf60d5ff49f5f253190d943a3b49ba-rootfs.mount: Deactivated successfully. Oct 2 20:12:06.596481 kubelet[1440]: E1002 20:12:06.596419 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:07.597256 kubelet[1440]: E1002 20:12:07.597179 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:07.877419 kubelet[1440]: W1002 20:12:07.877101 1440 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b9eff24_bfeb_48e7_b6c0_1d2c37b26f9c.slice/cri-containerd-0be3206c228d5621c120ec39e7bc7189f1bf60d5ff49f5f253190d943a3b49ba.scope WatchSource:0}: task 0be3206c228d5621c120ec39e7bc7189f1bf60d5ff49f5f253190d943a3b49ba not found: not found Oct 2 20:12:08.533922 kubelet[1440]: E1002 20:12:08.533887 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:12:08.597900 kubelet[1440]: E1002 20:12:08.597849 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:09.598054 kubelet[1440]: E1002 20:12:09.598018 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:10.598897 kubelet[1440]: E1002 20:12:10.598850 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:11.599940 kubelet[1440]: E1002 20:12:11.599897 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:12.600757 kubelet[1440]: E1002 20:12:12.600716 1440 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:13.534734 kubelet[1440]: E1002 20:12:13.534688 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:12:13.601035 kubelet[1440]: E1002 20:12:13.600990 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:14.601981 kubelet[1440]: E1002 20:12:14.601949 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:15.602964 kubelet[1440]: E1002 20:12:15.602918 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:16.604057 kubelet[1440]: E1002 20:12:16.604023 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:17.605495 kubelet[1440]: E1002 20:12:17.605457 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:18.535556 kubelet[1440]: E1002 20:12:18.535526 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:12:18.606135 kubelet[1440]: E1002 20:12:18.606100 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:18.620056 kubelet[1440]: E1002 20:12:18.620026 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:12:18.620264 kubelet[1440]: E1002 20:12:18.620250 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-hrkgh_kube-system(2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c)\"" pod="kube-system/cilium-hrkgh" podUID=2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c Oct 2 20:12:19.606751 kubelet[1440]: E1002 20:12:19.606674 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:20.607406 kubelet[1440]: E1002 20:12:20.607357 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:21.608064 kubelet[1440]: E1002 20:12:21.608026 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:21.620851 kubelet[1440]: E1002 20:12:21.620826 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:12:22.609420 kubelet[1440]: E1002 20:12:22.609376 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:23.435778 kubelet[1440]: E1002 20:12:23.435755 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:23.454711 env[1140]: time="2023-10-02T20:12:23.454668037Z" level=info msg="StopPodSandbox for 
\"33fcf33019eb97207377e3758ca06b82d777f2313460c7ee51f4e57a9d0ee2cb\"" Oct 2 20:12:23.454980 env[1140]: time="2023-10-02T20:12:23.454836558Z" level=info msg="TearDown network for sandbox \"33fcf33019eb97207377e3758ca06b82d777f2313460c7ee51f4e57a9d0ee2cb\" successfully" Oct 2 20:12:23.454980 env[1140]: time="2023-10-02T20:12:23.454873598Z" level=info msg="StopPodSandbox for \"33fcf33019eb97207377e3758ca06b82d777f2313460c7ee51f4e57a9d0ee2cb\" returns successfully" Oct 2 20:12:23.455411 env[1140]: time="2023-10-02T20:12:23.455375040Z" level=info msg="RemovePodSandbox for \"33fcf33019eb97207377e3758ca06b82d777f2313460c7ee51f4e57a9d0ee2cb\"" Oct 2 20:12:23.455459 env[1140]: time="2023-10-02T20:12:23.455415800Z" level=info msg="Forcibly stopping sandbox \"33fcf33019eb97207377e3758ca06b82d777f2313460c7ee51f4e57a9d0ee2cb\"" Oct 2 20:12:23.455503 env[1140]: time="2023-10-02T20:12:23.455485761Z" level=info msg="TearDown network for sandbox \"33fcf33019eb97207377e3758ca06b82d777f2313460c7ee51f4e57a9d0ee2cb\" successfully" Oct 2 20:12:23.457850 env[1140]: time="2023-10-02T20:12:23.457822450Z" level=info msg="RemovePodSandbox \"33fcf33019eb97207377e3758ca06b82d777f2313460c7ee51f4e57a9d0ee2cb\" returns successfully" Oct 2 20:12:23.536940 kubelet[1440]: E1002 20:12:23.536915 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:12:23.609793 kubelet[1440]: E1002 20:12:23.609760 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:24.610208 kubelet[1440]: E1002 20:12:24.610131 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:25.610501 kubelet[1440]: E1002 20:12:25.610463 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:26.611789 kubelet[1440]: E1002 20:12:26.611752 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:27.612077 kubelet[1440]: E1002 20:12:27.612036 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:28.538410 kubelet[1440]: E1002 20:12:28.538367 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:12:28.612901 kubelet[1440]: E1002 20:12:28.612875 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:29.613839 kubelet[1440]: E1002 20:12:29.613795 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:30.614551 kubelet[1440]: E1002 20:12:30.614516 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:30.620050 kubelet[1440]: E1002 20:12:30.620028 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:12:30.622410 env[1140]: time="2023-10-02T20:12:30.622374381Z" level=info msg="CreateContainer within sandbox 
\"fb65cdccbd88652757b824f8d4f9d914ea7307ed7b4ae2c493d55d48006c5ad5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 20:12:30.630299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount419834280.mount: Deactivated successfully. Oct 2 20:12:30.634440 env[1140]: time="2023-10-02T20:12:30.634391629Z" level=info msg="CreateContainer within sandbox \"fb65cdccbd88652757b824f8d4f9d914ea7307ed7b4ae2c493d55d48006c5ad5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"007d5bd14b2d50ae5047c2ae54a556bdf43995ac45ffd404afa77af0ed86da50\"" Oct 2 20:12:30.635976 env[1140]: time="2023-10-02T20:12:30.635936755Z" level=info msg="StartContainer for \"007d5bd14b2d50ae5047c2ae54a556bdf43995ac45ffd404afa77af0ed86da50\"" Oct 2 20:12:30.651169 systemd[1]: Started cri-containerd-007d5bd14b2d50ae5047c2ae54a556bdf43995ac45ffd404afa77af0ed86da50.scope. Oct 2 20:12:30.681944 systemd[1]: cri-containerd-007d5bd14b2d50ae5047c2ae54a556bdf43995ac45ffd404afa77af0ed86da50.scope: Deactivated successfully. Oct 2 20:12:30.685143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-007d5bd14b2d50ae5047c2ae54a556bdf43995ac45ffd404afa77af0ed86da50-rootfs.mount: Deactivated successfully. Oct 2 20:12:30.690597 env[1140]: time="2023-10-02T20:12:30.690547375Z" level=info msg="shim disconnected" id=007d5bd14b2d50ae5047c2ae54a556bdf43995ac45ffd404afa77af0ed86da50 Oct 2 20:12:30.690817 env[1140]: time="2023-10-02T20:12:30.690797296Z" level=warning msg="cleaning up after shim disconnected" id=007d5bd14b2d50ae5047c2ae54a556bdf43995ac45ffd404afa77af0ed86da50 namespace=k8s.io Oct 2 20:12:30.690897 env[1140]: time="2023-10-02T20:12:30.690883256Z" level=info msg="cleaning up dead shim" Oct 2 20:12:30.698986 env[1140]: time="2023-10-02T20:12:30.698943649Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:12:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2287 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:12:30Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/007d5bd14b2d50ae5047c2ae54a556bdf43995ac45ffd404afa77af0ed86da50/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:12:30.699405 env[1140]: time="2023-10-02T20:12:30.699346410Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 20:12:30.703326 env[1140]: time="2023-10-02T20:12:30.703283586Z" level=error msg="Failed to pipe stdout of container \"007d5bd14b2d50ae5047c2ae54a556bdf43995ac45ffd404afa77af0ed86da50\"" error="reading from a closed fifo" Oct 2 20:12:30.703394 env[1140]: time="2023-10-02T20:12:30.703308666Z" level=error msg="Failed to pipe stderr of container \"007d5bd14b2d50ae5047c2ae54a556bdf43995ac45ffd404afa77af0ed86da50\"" error="reading from a closed fifo" Oct 2 20:12:30.705180 env[1140]: time="2023-10-02T20:12:30.705116634Z" level=error msg="StartContainer for \"007d5bd14b2d50ae5047c2ae54a556bdf43995ac45ffd404afa77af0ed86da50\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:12:30.705505 kubelet[1440]: E1002 20:12:30.705477 1440 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create 
failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="007d5bd14b2d50ae5047c2ae54a556bdf43995ac45ffd404afa77af0ed86da50" Oct 2 20:12:30.705594 kubelet[1440]: E1002 20:12:30.705577 1440 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:12:30.705594 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:12:30.705594 kubelet[1440]: rm /hostbin/cilium-mount Oct 2 20:12:30.705594 kubelet[1440]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ct56h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-hrkgh_kube-system(2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:12:30.705739 kubelet[1440]: E1002 20:12:30.705619 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-hrkgh" podUID=2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c Oct 2 20:12:31.013391 kubelet[1440]: I1002 20:12:31.013305 1440 scope.go:115] "RemoveContainer" containerID="0be3206c228d5621c120ec39e7bc7189f1bf60d5ff49f5f253190d943a3b49ba" Oct 2 20:12:31.013890 kubelet[1440]: I1002 20:12:31.013869 1440 scope.go:115] "RemoveContainer" containerID="0be3206c228d5621c120ec39e7bc7189f1bf60d5ff49f5f253190d943a3b49ba" Oct 2 20:12:31.016394 env[1140]: time="2023-10-02T20:12:31.016355726Z" level=info msg="RemoveContainer for \"0be3206c228d5621c120ec39e7bc7189f1bf60d5ff49f5f253190d943a3b49ba\"" Oct 2 20:12:31.017052 env[1140]: time="2023-10-02T20:12:31.017023009Z" level=info msg="RemoveContainer for 
\"0be3206c228d5621c120ec39e7bc7189f1bf60d5ff49f5f253190d943a3b49ba\"" Oct 2 20:12:31.017134 env[1140]: time="2023-10-02T20:12:31.017100929Z" level=error msg="RemoveContainer for \"0be3206c228d5621c120ec39e7bc7189f1bf60d5ff49f5f253190d943a3b49ba\" failed" error="failed to set removing state for container \"0be3206c228d5621c120ec39e7bc7189f1bf60d5ff49f5f253190d943a3b49ba\": container is already in removing state" Oct 2 20:12:31.017266 kubelet[1440]: E1002 20:12:31.017249 1440 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"0be3206c228d5621c120ec39e7bc7189f1bf60d5ff49f5f253190d943a3b49ba\": container is already in removing state" containerID="0be3206c228d5621c120ec39e7bc7189f1bf60d5ff49f5f253190d943a3b49ba" Oct 2 20:12:31.017356 kubelet[1440]: E1002 20:12:31.017345 1440 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "0be3206c228d5621c120ec39e7bc7189f1bf60d5ff49f5f253190d943a3b49ba": container is already in removing state; Skipping pod "cilium-hrkgh_kube-system(2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c)" Oct 2 20:12:31.017458 kubelet[1440]: E1002 20:12:31.017448 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:12:31.017747 kubelet[1440]: E1002 20:12:31.017728 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-hrkgh_kube-system(2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c)\"" pod="kube-system/cilium-hrkgh" podUID=2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c Oct 2 20:12:31.019141 env[1140]: time="2023-10-02T20:12:31.019099697Z" level=info msg="RemoveContainer for \"0be3206c228d5621c120ec39e7bc7189f1bf60d5ff49f5f253190d943a3b49ba\" returns successfully" Oct 2 20:12:31.615279 kubelet[1440]: E1002 20:12:31.615214 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:32.615827 kubelet[1440]: E1002 20:12:32.615761 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:33.538974 kubelet[1440]: E1002 20:12:33.538935 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:12:33.616569 kubelet[1440]: E1002 20:12:33.616532 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:33.795522 kubelet[1440]: W1002 20:12:33.795389 1440 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b9eff24_bfeb_48e7_b6c0_1d2c37b26f9c.slice/cri-containerd-007d5bd14b2d50ae5047c2ae54a556bdf43995ac45ffd404afa77af0ed86da50.scope WatchSource:0}: task 007d5bd14b2d50ae5047c2ae54a556bdf43995ac45ffd404afa77af0ed86da50 not found: not found Oct 2 20:12:34.616979 kubelet[1440]: E1002 20:12:34.616866 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:35.617041 kubelet[1440]: E1002 20:12:35.616967 1440 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:36.617554 kubelet[1440]: E1002 20:12:36.617498 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:37.617857 kubelet[1440]: E1002 20:12:37.617783 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:38.539927 kubelet[1440]: E1002 20:12:38.539889 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:12:38.618343 kubelet[1440]: E1002 20:12:38.618292 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:39.619181 kubelet[1440]: E1002 20:12:39.619105 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:40.620238 kubelet[1440]: E1002 20:12:40.620150 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:41.620622 kubelet[1440]: E1002 20:12:41.620562 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:42.621118 kubelet[1440]: E1002 20:12:42.621072 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:43.435325 kubelet[1440]: E1002 20:12:43.435275 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:43.540588 kubelet[1440]: E1002 20:12:43.540548 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:12:43.621381 kubelet[1440]: E1002 20:12:43.621355 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:43.621895 kubelet[1440]: E1002 20:12:43.621874 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:12:43.622155 kubelet[1440]: E1002 20:12:43.622132 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-hrkgh_kube-system(2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c)\"" pod="kube-system/cilium-hrkgh" podUID=2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c Oct 2 20:12:44.622766 kubelet[1440]: E1002 20:12:44.622642 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:45.623471 kubelet[1440]: E1002 20:12:45.623434 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:46.623759 kubelet[1440]: E1002 20:12:46.623692 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:47.624830 kubelet[1440]: E1002 20:12:47.624803 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:47.830384 
env[1140]: time="2023-10-02T20:12:47.830341524Z" level=info msg="StopPodSandbox for \"fb65cdccbd88652757b824f8d4f9d914ea7307ed7b4ae2c493d55d48006c5ad5\"" Oct 2 20:12:47.831996 env[1140]: time="2023-10-02T20:12:47.830407444Z" level=info msg="Container to stop \"007d5bd14b2d50ae5047c2ae54a556bdf43995ac45ffd404afa77af0ed86da50\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 20:12:47.831622 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fb65cdccbd88652757b824f8d4f9d914ea7307ed7b4ae2c493d55d48006c5ad5-shm.mount: Deactivated successfully. Oct 2 20:12:47.838402 systemd[1]: cri-containerd-fb65cdccbd88652757b824f8d4f9d914ea7307ed7b4ae2c493d55d48006c5ad5.scope: Deactivated successfully. Oct 2 20:12:47.838000 audit: BPF prog-id=79 op=UNLOAD Oct 2 20:12:47.839509 kernel: kauditd_printk_skb: 50 callbacks suppressed Oct 2 20:12:47.839577 kernel: audit: type=1334 audit(1696277567.838:712): prog-id=79 op=UNLOAD Oct 2 20:12:47.840503 env[1140]: time="2023-10-02T20:12:47.840471124Z" level=info msg="StopContainer for \"399ea6fab9691a861d417e78dd5a3f94e282ef9ad19f40abe206fbe7132c1553\" with timeout 30 (s)" Oct 2 20:12:47.840838 env[1140]: time="2023-10-02T20:12:47.840816045Z" level=info msg="Stop container \"399ea6fab9691a861d417e78dd5a3f94e282ef9ad19f40abe206fbe7132c1553\" with signal terminated" Oct 2 20:12:47.845000 audit: BPF prog-id=82 op=UNLOAD Oct 2 20:12:47.846266 kernel: audit: type=1334 audit(1696277567.845:713): prog-id=82 op=UNLOAD Oct 2 20:12:47.854744 systemd[1]: cri-containerd-399ea6fab9691a861d417e78dd5a3f94e282ef9ad19f40abe206fbe7132c1553.scope: Deactivated successfully. Oct 2 20:12:47.854000 audit: BPF prog-id=83 op=UNLOAD Oct 2 20:12:47.856239 kernel: audit: type=1334 audit(1696277567.854:714): prog-id=83 op=UNLOAD Oct 2 20:12:47.858000 audit: BPF prog-id=86 op=UNLOAD Oct 2 20:12:47.861358 kernel: audit: type=1334 audit(1696277567.858:715): prog-id=86 op=UNLOAD Oct 2 20:12:47.861248 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb65cdccbd88652757b824f8d4f9d914ea7307ed7b4ae2c493d55d48006c5ad5-rootfs.mount: Deactivated successfully. Oct 2 20:12:47.867425 env[1140]: time="2023-10-02T20:12:47.867373751Z" level=info msg="shim disconnected" id=fb65cdccbd88652757b824f8d4f9d914ea7307ed7b4ae2c493d55d48006c5ad5 Oct 2 20:12:47.867425 env[1140]: time="2023-10-02T20:12:47.867422231Z" level=warning msg="cleaning up after shim disconnected" id=fb65cdccbd88652757b824f8d4f9d914ea7307ed7b4ae2c493d55d48006c5ad5 namespace=k8s.io Oct 2 20:12:47.867595 env[1140]: time="2023-10-02T20:12:47.867432831Z" level=info msg="cleaning up dead shim" Oct 2 20:12:47.874584 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-399ea6fab9691a861d417e78dd5a3f94e282ef9ad19f40abe206fbe7132c1553-rootfs.mount: Deactivated successfully. 
Oct 2 20:12:47.877396 env[1140]: time="2023-10-02T20:12:47.877304270Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:12:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2334 runtime=io.containerd.runc.v2\n" Oct 2 20:12:47.877618 env[1140]: time="2023-10-02T20:12:47.877591231Z" level=info msg="TearDown network for sandbox \"fb65cdccbd88652757b824f8d4f9d914ea7307ed7b4ae2c493d55d48006c5ad5\" successfully" Oct 2 20:12:47.877670 env[1140]: time="2023-10-02T20:12:47.877615911Z" level=info msg="StopPodSandbox for \"fb65cdccbd88652757b824f8d4f9d914ea7307ed7b4ae2c493d55d48006c5ad5\" returns successfully" Oct 2 20:12:47.879161 env[1140]: time="2023-10-02T20:12:47.879075637Z" level=info msg="shim disconnected" id=399ea6fab9691a861d417e78dd5a3f94e282ef9ad19f40abe206fbe7132c1553 Oct 2 20:12:47.879161 env[1140]: time="2023-10-02T20:12:47.879129877Z" level=warning msg="cleaning up after shim disconnected" id=399ea6fab9691a861d417e78dd5a3f94e282ef9ad19f40abe206fbe7132c1553 namespace=k8s.io Oct 2 20:12:47.879161 env[1140]: time="2023-10-02T20:12:47.879140317Z" level=info msg="cleaning up dead shim" Oct 2 20:12:47.888595 env[1140]: time="2023-10-02T20:12:47.888559315Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:12:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2352 runtime=io.containerd.runc.v2\n" Oct 2 20:12:47.890094 env[1140]: time="2023-10-02T20:12:47.890058521Z" level=info msg="StopContainer for \"399ea6fab9691a861d417e78dd5a3f94e282ef9ad19f40abe206fbe7132c1553\" returns successfully" Oct 2 20:12:47.890496 env[1140]: time="2023-10-02T20:12:47.890472202Z" level=info msg="StopPodSandbox for \"824906496f42552e97f1b7d4dd0fa8c55188f49a0548aa02c347071c384237cc\"" Oct 2 20:12:47.890620 env[1140]: time="2023-10-02T20:12:47.890600083Z" level=info msg="Container to stop \"399ea6fab9691a861d417e78dd5a3f94e282ef9ad19f40abe206fbe7132c1553\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 20:12:47.892723 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-824906496f42552e97f1b7d4dd0fa8c55188f49a0548aa02c347071c384237cc-shm.mount: Deactivated successfully. Oct 2 20:12:47.898265 systemd[1]: cri-containerd-824906496f42552e97f1b7d4dd0fa8c55188f49a0548aa02c347071c384237cc.scope: Deactivated successfully. Oct 2 20:12:47.898000 audit: BPF prog-id=75 op=UNLOAD Oct 2 20:12:47.900251 kernel: audit: type=1334 audit(1696277567.898:716): prog-id=75 op=UNLOAD Oct 2 20:12:47.904000 audit: BPF prog-id=78 op=UNLOAD Oct 2 20:12:47.905237 kernel: audit: type=1334 audit(1696277567.904:717): prog-id=78 op=UNLOAD Oct 2 20:12:47.917533 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-824906496f42552e97f1b7d4dd0fa8c55188f49a0548aa02c347071c384237cc-rootfs.mount: Deactivated successfully. 
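The kubelet entries that follow enumerate, one volume at a time, the teardown of the cilium pod's mounts (host paths, secrets, the projected service-account token, the ConfigMap). A small, illustrative helper for pulling that volume list out of kubelet log text like the lines below; the journal.txt file name is hypothetical.

    # Illustrative helper: collect the volume names whose unmount the kubelet
    # reports in "operationExecutor.UnmountVolume started for volume ..." entries,
    # as in the reconciler_common.go lines below. journal.txt is a hypothetical
    # dump of this log.
    import re

    pattern = re.compile(r'UnmountVolume started for volume \\?"([^"\\]+)\\?"')

    with open("journal.txt") as fh:
        volumes = sorted({m.group(1) for line in fh for m in pattern.finditer(line)})

    print(volumes)  # e.g. ['bpf-maps', 'cilium-cgroup', ..., 'xtables-lock']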
Oct 2 20:12:47.921088 env[1140]: time="2023-10-02T20:12:47.921044244Z" level=info msg="shim disconnected" id=824906496f42552e97f1b7d4dd0fa8c55188f49a0548aa02c347071c384237cc Oct 2 20:12:47.921740 env[1140]: time="2023-10-02T20:12:47.921714766Z" level=warning msg="cleaning up after shim disconnected" id=824906496f42552e97f1b7d4dd0fa8c55188f49a0548aa02c347071c384237cc namespace=k8s.io Oct 2 20:12:47.921833 env[1140]: time="2023-10-02T20:12:47.921819487Z" level=info msg="cleaning up dead shim" Oct 2 20:12:47.929688 env[1140]: time="2023-10-02T20:12:47.929650078Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:12:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2384 runtime=io.containerd.runc.v2\n" Oct 2 20:12:47.930050 env[1140]: time="2023-10-02T20:12:47.930024239Z" level=info msg="TearDown network for sandbox \"824906496f42552e97f1b7d4dd0fa8c55188f49a0548aa02c347071c384237cc\" successfully" Oct 2 20:12:47.930142 env[1140]: time="2023-10-02T20:12:47.930124320Z" level=info msg="StopPodSandbox for \"824906496f42552e97f1b7d4dd0fa8c55188f49a0548aa02c347071c384237cc\" returns successfully" Oct 2 20:12:47.955858 kubelet[1440]: I1002 20:12:47.955820 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-host-proc-sys-net\") pod \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\" (UID: \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\") " Oct 2 20:12:47.955858 kubelet[1440]: I1002 20:12:47.955863 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-etc-cni-netd\") pod \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\" (UID: \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\") " Oct 2 20:12:47.956008 kubelet[1440]: I1002 20:12:47.955888 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-cilium-ipsec-secrets\") pod \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\" (UID: \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\") " Oct 2 20:12:47.956008 kubelet[1440]: I1002 20:12:47.955907 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-bpf-maps\") pod \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\" (UID: \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\") " Oct 2 20:12:47.956008 kubelet[1440]: I1002 20:12:47.955929 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-hubble-tls\") pod \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\" (UID: \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\") " Oct 2 20:12:47.956008 kubelet[1440]: I1002 20:12:47.955951 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-cilium-config-path\") pod \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\" (UID: \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\") " Oct 2 20:12:47.956008 kubelet[1440]: I1002 20:12:47.955968 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-cilium-cgroup\") pod \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\" (UID: 
\"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\") " Oct 2 20:12:47.956008 kubelet[1440]: I1002 20:12:47.955987 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-host-proc-sys-kernel\") pod \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\" (UID: \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\") " Oct 2 20:12:47.956145 kubelet[1440]: I1002 20:12:47.956008 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-clustermesh-secrets\") pod \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\" (UID: \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\") " Oct 2 20:12:47.956145 kubelet[1440]: I1002 20:12:47.956024 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-cilium-run\") pod \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\" (UID: \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\") " Oct 2 20:12:47.956145 kubelet[1440]: I1002 20:12:47.956041 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-cni-path\") pod \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\" (UID: \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\") " Oct 2 20:12:47.956145 kubelet[1440]: I1002 20:12:47.956061 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ct56h\" (UniqueName: \"kubernetes.io/projected/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-kube-api-access-ct56h\") pod \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\" (UID: \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\") " Oct 2 20:12:47.956145 kubelet[1440]: I1002 20:12:47.956079 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-hostproc\") pod \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\" (UID: \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\") " Oct 2 20:12:47.956145 kubelet[1440]: I1002 20:12:47.956095 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-xtables-lock\") pod \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\" (UID: \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\") " Oct 2 20:12:47.956299 kubelet[1440]: I1002 20:12:47.956111 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-lib-modules\") pod \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\" (UID: \"2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c\") " Oct 2 20:12:47.956299 kubelet[1440]: I1002 20:12:47.956161 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c" (UID: "2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:12:47.956299 kubelet[1440]: I1002 20:12:47.956184 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c" (UID: "2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:12:47.956299 kubelet[1440]: I1002 20:12:47.956198 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c" (UID: "2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:12:47.956820 kubelet[1440]: W1002 20:12:47.956465 1440 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 20:12:47.956820 kubelet[1440]: I1002 20:12:47.956480 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c" (UID: "2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:12:47.956820 kubelet[1440]: I1002 20:12:47.956505 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c" (UID: "2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:12:47.956820 kubelet[1440]: I1002 20:12:47.956526 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c" (UID: "2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:12:47.956820 kubelet[1440]: I1002 20:12:47.956731 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c" (UID: "2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:12:47.956997 kubelet[1440]: I1002 20:12:47.956754 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-cni-path" (OuterVolumeSpecName: "cni-path") pod "2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c" (UID: "2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:12:47.956997 kubelet[1440]: I1002 20:12:47.956774 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c" (UID: "2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:12:47.956997 kubelet[1440]: I1002 20:12:47.956790 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-hostproc" (OuterVolumeSpecName: "hostproc") pod "2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c" (UID: "2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:12:47.959362 kubelet[1440]: I1002 20:12:47.959325 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c" (UID: "2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 20:12:47.961667 kubelet[1440]: I1002 20:12:47.961635 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c" (UID: "2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:12:47.962544 kubelet[1440]: I1002 20:12:47.962520 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c" (UID: "2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 20:12:47.962701 kubelet[1440]: I1002 20:12:47.962663 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c" (UID: "2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 20:12:47.962750 kubelet[1440]: I1002 20:12:47.962663 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-kube-api-access-ct56h" (OuterVolumeSpecName: "kube-api-access-ct56h") pod "2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c" (UID: "2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c"). InnerVolumeSpecName "kube-api-access-ct56h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:12:48.042328 kubelet[1440]: I1002 20:12:48.042274 1440 scope.go:115] "RemoveContainer" containerID="007d5bd14b2d50ae5047c2ae54a556bdf43995ac45ffd404afa77af0ed86da50" Oct 2 20:12:48.044362 env[1140]: time="2023-10-02T20:12:48.044084692Z" level=info msg="RemoveContainer for \"007d5bd14b2d50ae5047c2ae54a556bdf43995ac45ffd404afa77af0ed86da50\"" Oct 2 20:12:48.046399 env[1140]: time="2023-10-02T20:12:48.046179060Z" level=info msg="RemoveContainer for \"007d5bd14b2d50ae5047c2ae54a556bdf43995ac45ffd404afa77af0ed86da50\" returns successfully" Oct 2 20:12:48.046752 kubelet[1440]: I1002 20:12:48.046708 1440 scope.go:115] "RemoveContainer" containerID="399ea6fab9691a861d417e78dd5a3f94e282ef9ad19f40abe206fbe7132c1553" Oct 2 20:12:48.047248 systemd[1]: Removed slice kubepods-burstable-pod2b9eff24_bfeb_48e7_b6c0_1d2c37b26f9c.slice. Oct 2 20:12:48.047927 env[1140]: time="2023-10-02T20:12:48.047900187Z" level=info msg="RemoveContainer for \"399ea6fab9691a861d417e78dd5a3f94e282ef9ad19f40abe206fbe7132c1553\"" Oct 2 20:12:48.050217 env[1140]: time="2023-10-02T20:12:48.050175836Z" level=info msg="RemoveContainer for \"399ea6fab9691a861d417e78dd5a3f94e282ef9ad19f40abe206fbe7132c1553\" returns successfully" Oct 2 20:12:48.050463 kubelet[1440]: I1002 20:12:48.050442 1440 scope.go:115] "RemoveContainer" containerID="399ea6fab9691a861d417e78dd5a3f94e282ef9ad19f40abe206fbe7132c1553" Oct 2 20:12:48.050715 env[1140]: time="2023-10-02T20:12:48.050620318Z" level=error msg="ContainerStatus for \"399ea6fab9691a861d417e78dd5a3f94e282ef9ad19f40abe206fbe7132c1553\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"399ea6fab9691a861d417e78dd5a3f94e282ef9ad19f40abe206fbe7132c1553\": not found" Oct 2 20:12:48.050844 kubelet[1440]: E1002 20:12:48.050814 1440 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"399ea6fab9691a861d417e78dd5a3f94e282ef9ad19f40abe206fbe7132c1553\": not found" containerID="399ea6fab9691a861d417e78dd5a3f94e282ef9ad19f40abe206fbe7132c1553" Oct 2 20:12:48.050888 kubelet[1440]: I1002 20:12:48.050862 1440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:399ea6fab9691a861d417e78dd5a3f94e282ef9ad19f40abe206fbe7132c1553} err="failed to get container status \"399ea6fab9691a861d417e78dd5a3f94e282ef9ad19f40abe206fbe7132c1553\": rpc error: code = NotFound desc = an error occurred when try to find container \"399ea6fab9691a861d417e78dd5a3f94e282ef9ad19f40abe206fbe7132c1553\": not found" Oct 2 20:12:48.056340 kubelet[1440]: I1002 20:12:48.056295 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a25e6522-9d2b-4126-8442-b48a50f6cdd8-cilium-config-path\") pod \"a25e6522-9d2b-4126-8442-b48a50f6cdd8\" (UID: \"a25e6522-9d2b-4126-8442-b48a50f6cdd8\") " Oct 2 20:12:48.056340 kubelet[1440]: I1002 20:12:48.056339 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw2kf\" (UniqueName: \"kubernetes.io/projected/a25e6522-9d2b-4126-8442-b48a50f6cdd8-kube-api-access-sw2kf\") pod \"a25e6522-9d2b-4126-8442-b48a50f6cdd8\" (UID: \"a25e6522-9d2b-4126-8442-b48a50f6cdd8\") " Oct 2 20:12:48.056484 kubelet[1440]: I1002 20:12:48.056364 1440 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-xtables-lock\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:12:48.056484 kubelet[1440]: I1002 20:12:48.056375 1440 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-lib-modules\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:12:48.056484 kubelet[1440]: I1002 20:12:48.056384 1440 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-cni-path\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:12:48.056484 kubelet[1440]: I1002 20:12:48.056399 1440 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-ct56h\" (UniqueName: \"kubernetes.io/projected/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-kube-api-access-ct56h\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:12:48.056484 kubelet[1440]: I1002 20:12:48.056410 1440 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-hostproc\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:12:48.056484 kubelet[1440]: I1002 20:12:48.056419 1440 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-host-proc-sys-net\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:12:48.056484 kubelet[1440]: I1002 20:12:48.056428 1440 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-etc-cni-netd\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:12:48.056484 kubelet[1440]: I1002 20:12:48.056436 1440 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-hubble-tls\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:12:48.056670 kubelet[1440]: I1002 20:12:48.056445 1440 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-cilium-ipsec-secrets\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:12:48.056670 kubelet[1440]: I1002 20:12:48.056454 1440 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-bpf-maps\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:12:48.056670 kubelet[1440]: I1002 20:12:48.056463 1440 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-clustermesh-secrets\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:12:48.056670 kubelet[1440]: I1002 20:12:48.056473 1440 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-cilium-run\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:12:48.056670 kubelet[1440]: I1002 20:12:48.056482 1440 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-cilium-config-path\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:12:48.056670 kubelet[1440]: I1002 20:12:48.056491 1440 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-cilium-cgroup\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:12:48.056670 kubelet[1440]: I1002 20:12:48.056499 1440 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2b9eff24-bfeb-48e7-b6c0-1d2c37b26f9c-host-proc-sys-kernel\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:12:48.056986 kubelet[1440]: W1002 20:12:48.056906 1440 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/a25e6522-9d2b-4126-8442-b48a50f6cdd8/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 20:12:48.059250 kubelet[1440]: I1002 20:12:48.059194 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a25e6522-9d2b-4126-8442-b48a50f6cdd8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a25e6522-9d2b-4126-8442-b48a50f6cdd8" (UID: "a25e6522-9d2b-4126-8442-b48a50f6cdd8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 20:12:48.059580 kubelet[1440]: I1002 20:12:48.059554 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a25e6522-9d2b-4126-8442-b48a50f6cdd8-kube-api-access-sw2kf" (OuterVolumeSpecName: "kube-api-access-sw2kf") pod "a25e6522-9d2b-4126-8442-b48a50f6cdd8" (UID: "a25e6522-9d2b-4126-8442-b48a50f6cdd8"). InnerVolumeSpecName "kube-api-access-sw2kf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:12:48.157102 kubelet[1440]: I1002 20:12:48.157003 1440 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a25e6522-9d2b-4126-8442-b48a50f6cdd8-cilium-config-path\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:12:48.157102 kubelet[1440]: I1002 20:12:48.157042 1440 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-sw2kf\" (UniqueName: \"kubernetes.io/projected/a25e6522-9d2b-4126-8442-b48a50f6cdd8-kube-api-access-sw2kf\") on node \"10.0.0.13\" DevicePath \"\"" Oct 2 20:12:48.347841 systemd[1]: Removed slice kubepods-besteffort-poda25e6522_9d2b_4126_8442_b48a50f6cdd8.slice. Oct 2 20:12:48.541621 kubelet[1440]: E1002 20:12:48.541521 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:12:48.626325 kubelet[1440]: E1002 20:12:48.626282 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:12:48.831580 systemd[1]: var-lib-kubelet-pods-2b9eff24\x2dbfeb\x2d48e7\x2db6c0\x2d1d2c37b26f9c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dct56h.mount: Deactivated successfully. Oct 2 20:12:48.831686 systemd[1]: var-lib-kubelet-pods-a25e6522\x2d9d2b\x2d4126\x2d8442\x2db48a50f6cdd8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsw2kf.mount: Deactivated successfully. Oct 2 20:12:48.831748 systemd[1]: var-lib-kubelet-pods-2b9eff24\x2dbfeb\x2d48e7\x2db6c0\x2d1d2c37b26f9c-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Oct 2 20:12:48.831813 systemd[1]: var-lib-kubelet-pods-2b9eff24\x2dbfeb\x2d48e7\x2db6c0\x2d1d2c37b26f9c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Oct 2 20:12:48.831866 systemd[1]: var-lib-kubelet-pods-2b9eff24\x2dbfeb\x2d48e7\x2db6c0\x2d1d2c37b26f9c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.