Oct 2 19:11:00.723061 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Oct 2 19:11:00.723082 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Oct 2 17:55:37 -00 2023 Oct 2 19:11:00.723090 kernel: efi: EFI v2.70 by EDK II Oct 2 19:11:00.723095 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Oct 2 19:11:00.723100 kernel: random: crng init done Oct 2 19:11:00.723106 kernel: ACPI: Early table checksum verification disabled Oct 2 19:11:00.723112 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Oct 2 19:11:00.723118 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Oct 2 19:11:00.723124 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:11:00.723129 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:11:00.723135 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:11:00.723140 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:11:00.723145 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:11:00.723151 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:11:00.723159 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:11:00.723164 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:11:00.723170 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:11:00.723176 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Oct 2 19:11:00.723181 kernel: NUMA: Failed to initialise from firmware Oct 2 19:11:00.723187 kernel: NUMA: Faking a 
node at [mem 0x0000000040000000-0x00000000dcffffff] Oct 2 19:11:00.723192 kernel: NUMA: NODE_DATA [mem 0xdcb09900-0xdcb0efff] Oct 2 19:11:00.723198 kernel: Zone ranges: Oct 2 19:11:00.723204 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Oct 2 19:11:00.723211 kernel: DMA32 empty Oct 2 19:11:00.723216 kernel: Normal empty Oct 2 19:11:00.723222 kernel: Movable zone start for each node Oct 2 19:11:00.723227 kernel: Early memory node ranges Oct 2 19:11:00.723233 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Oct 2 19:11:00.723239 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Oct 2 19:11:00.723244 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Oct 2 19:11:00.723250 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Oct 2 19:11:00.723255 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Oct 2 19:11:00.723261 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Oct 2 19:11:00.723267 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Oct 2 19:11:00.723272 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Oct 2 19:11:00.723279 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Oct 2 19:11:00.723285 kernel: psci: probing for conduit method from ACPI. Oct 2 19:11:00.723290 kernel: psci: PSCIv1.1 detected in firmware. 
Oct 2 19:11:00.723296 kernel: psci: Using standard PSCI v0.2 function IDs Oct 2 19:11:00.723301 kernel: psci: Trusted OS migration not required Oct 2 19:11:00.723310 kernel: psci: SMC Calling Convention v1.1 Oct 2 19:11:00.723316 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Oct 2 19:11:00.723323 kernel: ACPI: SRAT not present Oct 2 19:11:00.723330 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784 Oct 2 19:11:00.723336 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096 Oct 2 19:11:00.723342 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Oct 2 19:11:00.723348 kernel: Detected PIPT I-cache on CPU0 Oct 2 19:11:00.723354 kernel: CPU features: detected: GIC system register CPU interface Oct 2 19:11:00.723360 kernel: CPU features: detected: Hardware dirty bit management Oct 2 19:11:00.723366 kernel: CPU features: detected: Spectre-v4 Oct 2 19:11:00.723372 kernel: CPU features: detected: Spectre-BHB Oct 2 19:11:00.723379 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 2 19:11:00.723385 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 2 19:11:00.723391 kernel: CPU features: detected: ARM erratum 1418040 Oct 2 19:11:00.723397 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Oct 2 19:11:00.723403 kernel: Policy zone: DMA Oct 2 19:11:00.723410 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca Oct 2 19:11:00.723416 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Oct 2 19:11:00.723422 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 2 19:11:00.723428 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 19:11:00.723434 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 19:11:00.723441 kernel: Memory: 2459272K/2572288K available (9792K kernel code, 2092K rwdata, 7548K rodata, 34560K init, 779K bss, 113016K reserved, 0K cma-reserved) Oct 2 19:11:00.723448 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 2 19:11:00.723454 kernel: trace event string verifier disabled Oct 2 19:11:00.723460 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 2 19:11:00.723467 kernel: rcu: RCU event tracing is enabled. Oct 2 19:11:00.723473 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 2 19:11:00.723479 kernel: Trampoline variant of Tasks RCU enabled. Oct 2 19:11:00.723485 kernel: Tracing variant of Tasks RCU enabled. Oct 2 19:11:00.723491 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Oct 2 19:11:00.723498 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 2 19:11:00.723503 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 2 19:11:00.723510 kernel: GICv3: 256 SPIs implemented Oct 2 19:11:00.723517 kernel: GICv3: 0 Extended SPIs implemented Oct 2 19:11:00.723523 kernel: GICv3: Distributor has no Range Selector support Oct 2 19:11:00.723528 kernel: Root IRQ handler: gic_handle_irq Oct 2 19:11:00.723534 kernel: GICv3: 16 PPIs implemented Oct 2 19:11:00.723541 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Oct 2 19:11:00.723546 kernel: ACPI: SRAT not present Oct 2 19:11:00.723552 kernel: ITS [mem 0x08080000-0x0809ffff] Oct 2 19:11:00.723558 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Oct 2 19:11:00.723565 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Oct 2 19:11:00.723571 kernel: GICv3: using LPI property table @0x00000000400d0000 Oct 2 19:11:00.723577 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Oct 2 19:11:00.723583 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 2 19:11:00.723590 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Oct 2 19:11:00.723597 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Oct 2 19:11:00.723603 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Oct 2 19:11:00.723609 kernel: arm-pv: using stolen time PV Oct 2 19:11:00.723615 kernel: Console: colour dummy device 80x25 Oct 2 19:11:00.723621 kernel: ACPI: Core revision 20210730 Oct 2 19:11:00.723635 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
50.00 BogoMIPS (lpj=25000) Oct 2 19:11:00.723642 kernel: pid_max: default: 32768 minimum: 301 Oct 2 19:11:00.723648 kernel: LSM: Security Framework initializing Oct 2 19:11:00.723654 kernel: SELinux: Initializing. Oct 2 19:11:00.723662 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:11:00.723668 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:11:00.723674 kernel: rcu: Hierarchical SRCU implementation. Oct 2 19:11:00.723680 kernel: Platform MSI: ITS@0x8080000 domain created Oct 2 19:11:00.723686 kernel: PCI/MSI: ITS@0x8080000 domain created Oct 2 19:11:00.723692 kernel: Remapping and enabling EFI services. Oct 2 19:11:00.723699 kernel: smp: Bringing up secondary CPUs ... Oct 2 19:11:00.723705 kernel: Detected PIPT I-cache on CPU1 Oct 2 19:11:00.723711 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Oct 2 19:11:00.723718 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Oct 2 19:11:00.723725 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 2 19:11:00.723731 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Oct 2 19:11:00.723737 kernel: Detected PIPT I-cache on CPU2 Oct 2 19:11:00.723743 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Oct 2 19:11:00.723750 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Oct 2 19:11:00.723756 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 2 19:11:00.723762 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Oct 2 19:11:00.723768 kernel: Detected PIPT I-cache on CPU3 Oct 2 19:11:00.723774 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Oct 2 19:11:00.723782 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Oct 2 19:11:00.723788 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 2 19:11:00.723794 kernel: CPU3: 
Booted secondary processor 0x0000000003 [0x413fd0c1] Oct 2 19:11:00.723800 kernel: smp: Brought up 1 node, 4 CPUs Oct 2 19:11:00.723810 kernel: SMP: Total of 4 processors activated. Oct 2 19:11:00.723818 kernel: CPU features: detected: 32-bit EL0 Support Oct 2 19:11:00.723825 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Oct 2 19:11:00.723832 kernel: CPU features: detected: Common not Private translations Oct 2 19:11:00.723839 kernel: CPU features: detected: CRC32 instructions Oct 2 19:11:00.723845 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Oct 2 19:11:00.723874 kernel: CPU features: detected: LSE atomic instructions Oct 2 19:11:00.723881 kernel: CPU features: detected: Privileged Access Never Oct 2 19:11:00.723890 kernel: CPU features: detected: RAS Extension Support Oct 2 19:11:00.723897 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Oct 2 19:11:00.723903 kernel: CPU: All CPU(s) started at EL1 Oct 2 19:11:00.723910 kernel: alternatives: patching kernel code Oct 2 19:11:00.723916 kernel: devtmpfs: initialized Oct 2 19:11:00.723930 kernel: KASLR enabled Oct 2 19:11:00.723937 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 19:11:00.723943 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 2 19:11:00.723950 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 19:11:00.723956 kernel: SMBIOS 3.0.0 present. 
Oct 2 19:11:00.723963 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Oct 2 19:11:00.723969 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 19:11:00.723976 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 2 19:11:00.723983 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 2 19:11:00.723991 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 2 19:11:00.723997 kernel: audit: initializing netlink subsys (disabled) Oct 2 19:11:00.724004 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Oct 2 19:11:00.724010 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 19:11:00.724017 kernel: cpuidle: using governor menu Oct 2 19:11:00.724023 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Oct 2 19:11:00.724030 kernel: ASID allocator initialised with 32768 entries Oct 2 19:11:00.724036 kernel: ACPI: bus type PCI registered Oct 2 19:11:00.724043 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 19:11:00.724051 kernel: Serial: AMBA PL011 UART driver Oct 2 19:11:00.724057 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 2 19:11:00.724064 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Oct 2 19:11:00.724071 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 19:11:00.724077 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Oct 2 19:11:00.724084 kernel: cryptd: max_cpu_qlen set to 1000 Oct 2 19:11:00.724091 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 2 19:11:00.724097 kernel: ACPI: Added _OSI(Module Device) Oct 2 19:11:00.724104 kernel: ACPI: Added _OSI(Processor Device) Oct 2 19:11:00.724111 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 19:11:00.724118 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 19:11:00.724124 kernel: ACPI: Added 
_OSI(Linux-Dell-Video) Oct 2 19:11:00.724131 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 19:11:00.724137 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 19:11:00.724144 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 2 19:11:00.724150 kernel: ACPI: Interpreter enabled Oct 2 19:11:00.724157 kernel: ACPI: Using GIC for interrupt routing Oct 2 19:11:00.724163 kernel: ACPI: MCFG table detected, 1 entries Oct 2 19:11:00.724171 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Oct 2 19:11:00.724177 kernel: printk: console [ttyAMA0] enabled Oct 2 19:11:00.724184 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 2 19:11:00.724322 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 2 19:11:00.724386 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 2 19:11:00.724444 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 2 19:11:00.724505 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Oct 2 19:11:00.724566 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Oct 2 19:11:00.724574 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Oct 2 19:11:00.724581 kernel: PCI host bridge to bus 0000:00 Oct 2 19:11:00.724662 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Oct 2 19:11:00.724718 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Oct 2 19:11:00.724774 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Oct 2 19:11:00.724898 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 2 19:11:00.724985 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Oct 2 19:11:00.725055 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Oct 2 19:11:00.725116 kernel: pci 0000:00:01.0: reg 0x10: [io 
0x0000-0x001f] Oct 2 19:11:00.725175 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Oct 2 19:11:00.725234 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Oct 2 19:11:00.725293 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Oct 2 19:11:00.725351 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Oct 2 19:11:00.725413 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Oct 2 19:11:00.725466 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Oct 2 19:11:00.725518 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 2 19:11:00.725569 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Oct 2 19:11:00.725577 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 2 19:11:00.725584 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 2 19:11:00.725591 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 2 19:11:00.725599 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 2 19:11:00.725606 kernel: iommu: Default domain type: Translated Oct 2 19:11:00.725612 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 2 19:11:00.725619 kernel: vgaarb: loaded Oct 2 19:11:00.725625 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 19:11:00.725642 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 2 19:11:00.725649 kernel: PTP clock support registered Oct 2 19:11:00.725656 kernel: Registered efivars operations Oct 2 19:11:00.725662 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 2 19:11:00.725669 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 19:11:00.725677 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 19:11:00.725684 kernel: pnp: PnP ACPI init Oct 2 19:11:00.725750 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Oct 2 19:11:00.725760 kernel: pnp: PnP ACPI: found 1 devices Oct 2 19:11:00.725766 kernel: NET: Registered PF_INET protocol family Oct 2 19:11:00.725773 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 2 19:11:00.725780 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 2 19:11:00.725787 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 19:11:00.725795 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 2 19:11:00.725802 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Oct 2 19:11:00.725808 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 2 19:11:00.725815 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:11:00.725821 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:11:00.725828 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 19:11:00.725834 kernel: PCI: CLS 0 bytes, default 64 Oct 2 19:11:00.725841 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Oct 2 19:11:00.725848 kernel: kvm [1]: HYP mode not available Oct 2 19:11:00.725855 kernel: Initialise system trusted keyrings Oct 2 19:11:00.725862 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 2 19:11:00.725868 kernel: Key type asymmetric registered 
Oct 2 19:11:00.725875 kernel: Asymmetric key parser 'x509' registered Oct 2 19:11:00.725881 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 19:11:00.725888 kernel: io scheduler mq-deadline registered Oct 2 19:11:00.725894 kernel: io scheduler kyber registered Oct 2 19:11:00.725901 kernel: io scheduler bfq registered Oct 2 19:11:00.725907 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 2 19:11:00.725915 kernel: ACPI: button: Power Button [PWRB] Oct 2 19:11:00.725927 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 2 19:11:00.725988 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Oct 2 19:11:00.725997 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 19:11:00.726004 kernel: thunder_xcv, ver 1.0 Oct 2 19:11:00.726010 kernel: thunder_bgx, ver 1.0 Oct 2 19:11:00.726017 kernel: nicpf, ver 1.0 Oct 2 19:11:00.726023 kernel: nicvf, ver 1.0 Oct 2 19:11:00.726095 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 2 19:11:00.726153 kernel: rtc-efi rtc-efi.0: setting system clock to 2023-10-02T19:11:00 UTC (1696273860) Oct 2 19:11:00.726162 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 2 19:11:00.726169 kernel: NET: Registered PF_INET6 protocol family Oct 2 19:11:00.726175 kernel: Segment Routing with IPv6 Oct 2 19:11:00.726181 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 19:11:00.726188 kernel: NET: Registered PF_PACKET protocol family Oct 2 19:11:00.726194 kernel: Key type dns_resolver registered Oct 2 19:11:00.726201 kernel: registered taskstats version 1 Oct 2 19:11:00.726209 kernel: Loading compiled-in X.509 certificates Oct 2 19:11:00.726216 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 3a2a38edc68cb70dc60ec0223a6460557b3bb28d' Oct 2 19:11:00.726223 kernel: Key type .fscrypt registered Oct 2 19:11:00.726229 kernel: Key type fscrypt-provisioning registered Oct 2 19:11:00.726236 kernel: ima: No TPM chip found, 
activating TPM-bypass! Oct 2 19:11:00.726242 kernel: ima: Allocated hash algorithm: sha1 Oct 2 19:11:00.726248 kernel: ima: No architecture policies found Oct 2 19:11:00.726255 kernel: Freeing unused kernel memory: 34560K Oct 2 19:11:00.726261 kernel: Run /init as init process Oct 2 19:11:00.726269 kernel: with arguments: Oct 2 19:11:00.726275 kernel: /init Oct 2 19:11:00.726282 kernel: with environment: Oct 2 19:11:00.726288 kernel: HOME=/ Oct 2 19:11:00.726295 kernel: TERM=linux Oct 2 19:11:00.726301 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 19:11:00.726309 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:11:00.726318 systemd[1]: Detected virtualization kvm. Oct 2 19:11:00.726326 systemd[1]: Detected architecture arm64. Oct 2 19:11:00.726333 systemd[1]: Running in initrd. Oct 2 19:11:00.726340 systemd[1]: No hostname configured, using default hostname. Oct 2 19:11:00.726347 systemd[1]: Hostname set to . Oct 2 19:11:00.726354 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:11:00.726361 systemd[1]: Queued start job for default target initrd.target. Oct 2 19:11:00.726368 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:11:00.726374 systemd[1]: Reached target cryptsetup.target. Oct 2 19:11:00.726383 systemd[1]: Reached target paths.target. Oct 2 19:11:00.726390 systemd[1]: Reached target slices.target. Oct 2 19:11:00.726396 systemd[1]: Reached target swap.target. Oct 2 19:11:00.726403 systemd[1]: Reached target timers.target. Oct 2 19:11:00.726411 systemd[1]: Listening on iscsid.socket. Oct 2 19:11:00.726418 systemd[1]: Listening on iscsiuio.socket. Oct 2 19:11:00.726425 systemd[1]: Listening on systemd-journald-audit.socket. 
Oct 2 19:11:00.726433 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 19:11:00.726440 systemd[1]: Listening on systemd-journald.socket. Oct 2 19:11:00.726448 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:11:00.726455 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:11:00.726461 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:11:00.726468 systemd[1]: Reached target sockets.target. Oct 2 19:11:00.726475 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:11:00.726482 systemd[1]: Finished network-cleanup.service. Oct 2 19:11:00.726489 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 19:11:00.726497 systemd[1]: Starting systemd-journald.service... Oct 2 19:11:00.726505 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:11:00.726512 systemd[1]: Starting systemd-resolved.service... Oct 2 19:11:00.726518 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 19:11:00.726525 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:11:00.726533 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 19:11:00.726540 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:11:00.726547 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:11:00.726554 kernel: audit: type=1130 audit(1696273860.723:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:11:00.726563 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 19:11:00.726573 systemd-journald[290]: Journal started Oct 2 19:11:00.726612 systemd-journald[290]: Runtime Journal (/run/log/journal/73421ab676d442c7b95504e486cbdb6d) is 6.0M, max 48.7M, 42.6M free. Oct 2 19:11:00.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:11:00.718386 systemd-modules-load[291]: Inserted module 'overlay' Oct 2 19:11:00.729353 kernel: audit: type=1130 audit(1696273860.726:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:11:00.729370 systemd[1]: Started systemd-journald.service. Oct 2 19:11:00.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:11:00.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:11:00.730687 kernel: audit: type=1130 audit(1696273860.729:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:11:00.731361 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 19:11:00.736957 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 19:11:00.737660 kernel: Bridge firewalling registered Oct 2 19:11:00.737557 systemd-modules-load[291]: Inserted module 'br_netfilter' Oct 2 19:11:00.746928 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 19:11:00.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:11:00.749087 systemd[1]: Starting dracut-cmdline.service... 
Oct 2 19:11:00.750761 kernel: audit: type=1130 audit(1696273860.748:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:11:00.750782 kernel: SCSI subsystem initialized Oct 2 19:11:00.751584 systemd-resolved[292]: Positive Trust Anchors: Oct 2 19:11:00.751596 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:11:00.751624 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:11:00.755917 systemd-resolved[292]: Defaulting to hostname 'linux'. Oct 2 19:11:00.757431 systemd[1]: Started systemd-resolved.service. Oct 2 19:11:00.760533 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 19:11:00.760593 kernel: device-mapper: uevent: version 1.0.3 Oct 2 19:11:00.760609 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 19:11:00.760915 systemd[1]: Reached target nss-lookup.target. Oct 2 19:11:00.763647 kernel: audit: type=1130 audit(1696273860.760:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:11:00.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:11:00.763695 dracut-cmdline[308]: dracut-dracut-053 Oct 2 19:11:00.763887 systemd-modules-load[291]: Inserted module 'dm_multipath' Oct 2 19:11:00.764790 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:11:00.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:11:00.767611 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca Oct 2 19:11:00.771370 kernel: audit: type=1130 audit(1696273860.764:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:11:00.767847 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:11:00.777969 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:11:00.780661 kernel: audit: type=1130 audit(1696273860.777:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:11:00.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:11:00.837656 kernel: Loading iSCSI transport class v2.0-870. 
Oct 2 19:11:00.845652 kernel: iscsi: registered transport (tcp)
Oct 2 19:11:00.858764 kernel: iscsi: registered transport (qla4xxx)
Oct 2 19:11:00.858780 kernel: QLogic iSCSI HBA Driver
Oct 2 19:11:00.905409 systemd[1]: Finished dracut-cmdline.service.
Oct 2 19:11:00.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:00.906832 systemd[1]: Starting dracut-pre-udev.service...
Oct 2 19:11:00.909131 kernel: audit: type=1130 audit(1696273860.905:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:00.959731 kernel: raid6: neonx8 gen() 13781 MB/s
Oct 2 19:11:00.976652 kernel: raid6: neonx8 xor() 10770 MB/s
Oct 2 19:11:00.993651 kernel: raid6: neonx4 gen() 13521 MB/s
Oct 2 19:11:01.010680 kernel: raid6: neonx4 xor() 11318 MB/s
Oct 2 19:11:01.027863 kernel: raid6: neonx2 gen() 12909 MB/s
Oct 2 19:11:01.044680 kernel: raid6: neonx2 xor() 10104 MB/s
Oct 2 19:11:01.062048 kernel: raid6: neonx1 gen() 10483 MB/s
Oct 2 19:11:01.078796 kernel: raid6: neonx1 xor() 8782 MB/s
Oct 2 19:11:01.097664 kernel: raid6: int64x8 gen() 5573 MB/s
Oct 2 19:11:01.112677 kernel: raid6: int64x8 xor() 3549 MB/s
Oct 2 19:11:01.129649 kernel: raid6: int64x4 gen() 7237 MB/s
Oct 2 19:11:01.146657 kernel: raid6: int64x4 xor() 3852 MB/s
Oct 2 19:11:01.163648 kernel: raid6: int64x2 gen() 6152 MB/s
Oct 2 19:11:01.180650 kernel: raid6: int64x2 xor() 3224 MB/s
Oct 2 19:11:01.197662 kernel: raid6: int64x1 gen() 5039 MB/s
Oct 2 19:11:01.214829 kernel: raid6: int64x1 xor() 2580 MB/s
Oct 2 19:11:01.214857 kernel: raid6: using algorithm neonx8 gen() 13781 MB/s
Oct 2 19:11:01.214867 kernel: raid6: .... xor() 10770 MB/s, rmw enabled
Oct 2 19:11:01.214876 kernel: raid6: using neon recovery algorithm
Oct 2 19:11:01.225657 kernel: xor: measuring software checksum speed
Oct 2 19:11:01.226646 kernel: 8regs : 17286 MB/sec
Oct 2 19:11:01.227722 kernel: 32regs : 20749 MB/sec
Oct 2 19:11:01.227733 kernel: arm64_neon : 27788 MB/sec
Oct 2 19:11:01.227741 kernel: xor: using function: arm64_neon (27788 MB/sec)
Oct 2 19:11:01.282665 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Oct 2 19:11:01.294813 systemd[1]: Finished dracut-pre-udev.service.
Oct 2 19:11:01.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:01.296000 audit: BPF prog-id=7 op=LOAD
Oct 2 19:11:01.296000 audit: BPF prog-id=8 op=LOAD
Oct 2 19:11:01.297653 kernel: audit: type=1130 audit(1696273861.294:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:01.297943 systemd[1]: Starting systemd-udevd.service...
Oct 2 19:11:01.310756 systemd-udevd[491]: Using default interface naming scheme 'v252'.
Oct 2 19:11:01.314079 systemd[1]: Started systemd-udevd.service.
Oct 2 19:11:01.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:01.315975 systemd[1]: Starting dracut-pre-trigger.service...
Oct 2 19:11:01.328818 dracut-pre-trigger[497]: rd.md=0: removing MD RAID activation
Oct 2 19:11:01.360737 systemd[1]: Finished dracut-pre-trigger.service.
Oct 2 19:11:01.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:01.362074 systemd[1]: Starting systemd-udev-trigger.service...
Oct 2 19:11:01.396326 systemd[1]: Finished systemd-udev-trigger.service.
Oct 2 19:11:01.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:01.440013 kernel: virtio_blk virtio1: [vda] 9289728 512-byte logical blocks (4.76 GB/4.43 GiB)
Oct 2 19:11:01.441648 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 2 19:11:01.453868 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Oct 2 19:11:01.456159 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (546)
Oct 2 19:11:01.462285 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Oct 2 19:11:01.466908 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Oct 2 19:11:01.467596 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Oct 2 19:11:01.471285 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Oct 2 19:11:01.472705 systemd[1]: Starting disk-uuid.service...
Oct 2 19:11:01.480651 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 2 19:11:02.495536 disk-uuid[565]: The operation has completed successfully.
Oct 2 19:11:02.496344 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 2 19:11:02.545724 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 2 19:11:02.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:02.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:02.545811 systemd[1]: Finished disk-uuid.service.
Oct 2 19:11:02.547166 systemd[1]: Starting verity-setup.service...
Oct 2 19:11:02.564691 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Oct 2 19:11:02.588493 systemd[1]: Found device dev-mapper-usr.device.
Oct 2 19:11:02.590470 systemd[1]: Mounting sysusr-usr.mount...
Oct 2 19:11:02.593352 systemd[1]: Finished verity-setup.service.
Oct 2 19:11:02.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:02.643398 systemd[1]: Mounted sysusr-usr.mount.
Oct 2 19:11:02.644398 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Oct 2 19:11:02.644074 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Oct 2 19:11:02.644805 systemd[1]: Starting ignition-setup.service...
Oct 2 19:11:02.646378 systemd[1]: Starting parse-ip-for-networkd.service...
Oct 2 19:11:02.654470 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 2 19:11:02.654519 kernel: BTRFS info (device vda6): using free space tree
Oct 2 19:11:02.654530 kernel: BTRFS info (device vda6): has skinny extents
Oct 2 19:11:02.667714 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 2 19:11:02.674235 systemd[1]: Finished ignition-setup.service.
Oct 2 19:11:02.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:02.675546 systemd[1]: Starting ignition-fetch-offline.service...
Oct 2 19:11:02.763028 systemd[1]: Finished parse-ip-for-networkd.service.
Oct 2 19:11:02.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:02.763076 ignition[645]: Ignition 2.14.0
Oct 2 19:11:02.763000 audit: BPF prog-id=9 op=LOAD
Oct 2 19:11:02.765249 systemd[1]: Starting systemd-networkd.service...
Oct 2 19:11:02.763082 ignition[645]: Stage: fetch-offline
Oct 2 19:11:02.763121 ignition[645]: no configs at "/usr/lib/ignition/base.d"
Oct 2 19:11:02.763130 ignition[645]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 2 19:11:02.763285 ignition[645]: parsed url from cmdline: ""
Oct 2 19:11:02.763288 ignition[645]: no config URL provided
Oct 2 19:11:02.763294 ignition[645]: reading system config file "/usr/lib/ignition/user.ign"
Oct 2 19:11:02.763300 ignition[645]: no config at "/usr/lib/ignition/user.ign"
Oct 2 19:11:02.763318 ignition[645]: op(1): [started] loading QEMU firmware config module
Oct 2 19:11:02.763330 ignition[645]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 2 19:11:02.771651 ignition[645]: op(1): [finished] loading QEMU firmware config module
Oct 2 19:11:02.771764 ignition[645]: QEMU firmware config was not found. Ignoring...
Oct 2 19:11:02.790440 ignition[645]: parsing config with SHA512: ed83cc1170b5466cfed241997726bed43d48cabc5d6eff1b8bf67edbc17b05be9d7d714ab3fba4d96077d7e439311b15e2de954c7a8f13e86a40d3d15299926f
Oct 2 19:11:02.791146 systemd-networkd[743]: lo: Link UP
Oct 2 19:11:02.791159 systemd-networkd[743]: lo: Gained carrier
Oct 2 19:11:02.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:02.791822 systemd-networkd[743]: Enumeration completed
Oct 2 19:11:02.791945 systemd[1]: Started systemd-networkd.service.
Oct 2 19:11:02.792029 systemd-networkd[743]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 2 19:11:02.793167 systemd[1]: Reached target network.target.
Oct 2 19:11:02.793420 systemd-networkd[743]: eth0: Link UP
Oct 2 19:11:02.793424 systemd-networkd[743]: eth0: Gained carrier
Oct 2 19:11:02.795557 systemd[1]: Starting iscsiuio.service...
Oct 2 19:11:02.808536 systemd[1]: Started iscsiuio.service.
Oct 2 19:11:02.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:02.810342 systemd[1]: Starting iscsid.service...
Oct 2 19:11:02.814574 iscsid[749]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Oct 2 19:11:02.814574 iscsid[749]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Oct 2 19:11:02.814574 iscsid[749]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Oct 2 19:11:02.814574 iscsid[749]: If using hardware iscsi like qla4xxx this message can be ignored.
Oct 2 19:11:02.814574 iscsid[749]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Oct 2 19:11:02.814574 iscsid[749]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Oct 2 19:11:02.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:02.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:02.816491 unknown[645]: fetched base config from "system"
Oct 2 19:11:02.816884 ignition[645]: fetch-offline: fetch-offline passed
Oct 2 19:11:02.816498 unknown[645]: fetched user config from "qemu"
Oct 2 19:11:02.816952 ignition[645]: Ignition finished successfully
Oct 2 19:11:02.818148 systemd[1]: Started iscsid.service.
Oct 2 19:11:02.820711 systemd-networkd[743]: eth0: DHCPv4 address 10.0.0.113/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 2 19:11:02.821212 systemd[1]: Starting dracut-initqueue.service...
Oct 2 19:11:02.822944 systemd[1]: Finished ignition-fetch-offline.service.
Oct 2 19:11:02.824306 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 2 19:11:02.825041 systemd[1]: Starting ignition-kargs.service...
Oct 2 19:11:02.833393 systemd[1]: Finished dracut-initqueue.service.
Oct 2 19:11:02.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:02.834230 systemd[1]: Reached target remote-fs-pre.target.
Oct 2 19:11:02.835216 systemd[1]: Reached target remote-cryptsetup.target.
Oct 2 19:11:02.835027 ignition[751]: Ignition 2.14.0
Oct 2 19:11:02.837092 systemd[1]: Reached target remote-fs.target.
Oct 2 19:11:02.835033 ignition[751]: Stage: kargs
Oct 2 19:11:02.838904 systemd[1]: Starting dracut-pre-mount.service...
Oct 2 19:11:02.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:02.835129 ignition[751]: no configs at "/usr/lib/ignition/base.d"
Oct 2 19:11:02.839838 systemd[1]: Finished ignition-kargs.service.
Oct 2 19:11:02.835138 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 2 19:11:02.841350 systemd[1]: Starting ignition-disks.service...
Oct 2 19:11:02.835904 ignition[751]: kargs: kargs passed
Oct 2 19:11:02.835961 ignition[751]: Ignition finished successfully
Oct 2 19:11:02.848773 systemd[1]: Finished dracut-pre-mount.service.
Oct 2 19:11:02.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:02.849735 ignition[765]: Ignition 2.14.0
Oct 2 19:11:02.849741 ignition[765]: Stage: disks
Oct 2 19:11:02.849834 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Oct 2 19:11:02.849843 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 2 19:11:02.851691 systemd[1]: Finished ignition-disks.service.
Oct 2 19:11:02.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:02.850859 ignition[765]: disks: disks passed
Oct 2 19:11:02.852974 systemd[1]: Reached target initrd-root-device.target.
Oct 2 19:11:02.850904 ignition[765]: Ignition finished successfully
Oct 2 19:11:02.853859 systemd[1]: Reached target local-fs-pre.target.
Oct 2 19:11:02.854733 systemd[1]: Reached target local-fs.target.
Oct 2 19:11:02.855735 systemd[1]: Reached target sysinit.target.
Oct 2 19:11:02.856648 systemd[1]: Reached target basic.target.
Oct 2 19:11:02.858426 systemd[1]: Starting systemd-fsck-root.service...
Oct 2 19:11:02.871468 systemd-fsck[777]: ROOT: clean, 603/553520 files, 56011/553472 blocks
Oct 2 19:11:02.874283 systemd[1]: Finished systemd-fsck-root.service.
Oct 2 19:11:02.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:02.876057 systemd[1]: Mounting sysroot.mount...
Oct 2 19:11:02.885476 systemd[1]: Mounted sysroot.mount.
Oct 2 19:11:02.886405 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Oct 2 19:11:02.886085 systemd[1]: Reached target initrd-root-fs.target.
Oct 2 19:11:02.887964 systemd[1]: Mounting sysroot-usr.mount...
Oct 2 19:11:02.888715 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Oct 2 19:11:02.888761 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 2 19:11:02.888787 systemd[1]: Reached target ignition-diskful.target.
Oct 2 19:11:02.891512 systemd[1]: Mounted sysroot-usr.mount.
Oct 2 19:11:02.892781 systemd[1]: Starting initrd-setup-root.service...
Oct 2 19:11:02.898375 initrd-setup-root[787]: cut: /sysroot/etc/passwd: No such file or directory
Oct 2 19:11:02.903771 initrd-setup-root[795]: cut: /sysroot/etc/group: No such file or directory
Oct 2 19:11:02.907597 initrd-setup-root[803]: cut: /sysroot/etc/shadow: No such file or directory
Oct 2 19:11:02.912365 initrd-setup-root[811]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 2 19:11:02.942718 systemd[1]: Finished initrd-setup-root.service.
Oct 2 19:11:02.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:02.944231 systemd[1]: Starting ignition-mount.service...
Oct 2 19:11:02.945439 systemd[1]: Starting sysroot-boot.service...
Oct 2 19:11:02.951546 bash[828]: umount: /sysroot/usr/share/oem: not mounted.
Oct 2 19:11:02.961178 ignition[830]: INFO : Ignition 2.14.0
Oct 2 19:11:02.961178 ignition[830]: INFO : Stage: mount
Oct 2 19:11:02.962346 ignition[830]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 2 19:11:02.962346 ignition[830]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 2 19:11:02.962346 ignition[830]: INFO : mount: mount passed
Oct 2 19:11:02.962346 ignition[830]: INFO : Ignition finished successfully
Oct 2 19:11:02.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:02.963256 systemd[1]: Finished ignition-mount.service.
Oct 2 19:11:02.967373 systemd[1]: Finished sysroot-boot.service.
Oct 2 19:11:02.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:03.600918 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Oct 2 19:11:03.608116 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (838)
Oct 2 19:11:03.608157 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 2 19:11:03.608168 kernel: BTRFS info (device vda6): using free space tree
Oct 2 19:11:03.609000 kernel: BTRFS info (device vda6): has skinny extents
Oct 2 19:11:03.611740 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Oct 2 19:11:03.612990 systemd[1]: Starting ignition-files.service...
Oct 2 19:11:03.628472 ignition[858]: INFO : Ignition 2.14.0
Oct 2 19:11:03.628472 ignition[858]: INFO : Stage: files
Oct 2 19:11:03.629702 ignition[858]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 2 19:11:03.629702 ignition[858]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 2 19:11:03.629702 ignition[858]: DEBUG : files: compiled without relabeling support, skipping
Oct 2 19:11:03.633867 ignition[858]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 2 19:11:03.633867 ignition[858]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 2 19:11:03.636308 ignition[858]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 2 19:11:03.636308 ignition[858]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 2 19:11:03.638259 ignition[858]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 2 19:11:03.638259 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Oct 2 19:11:03.638259 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1
Oct 2 19:11:03.636447 unknown[858]: wrote ssh authorized keys file for user: core
Oct 2 19:11:03.881316 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 2 19:11:04.192291 ignition[858]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742
Oct 2 19:11:04.192291 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Oct 2 19:11:04.195499 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.24.2-linux-arm64.tar.gz"
Oct 2 19:11:04.195499 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-arm64.tar.gz: attempt #1
Oct 2 19:11:04.451189 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Oct 2 19:11:04.554982 ignition[858]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: ebd055e9b2888624d006decd582db742131ed815d059d529ba21eaf864becca98a84b20a10eec91051b9d837c6855d28d5042bf5e9a454f4540aec6b82d37e96
Oct 2 19:11:04.556936 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.24.2-linux-arm64.tar.gz"
Oct 2 19:11:04.558343 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm"
Oct 2 19:11:04.559455 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/arm64/kubeadm: attempt #1
Oct 2 19:11:04.614470 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Oct 2 19:11:04.679356 systemd-networkd[743]: eth0: Gained IPv6LL
Oct 2 19:11:05.034856 ignition[858]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: daab8965a4f617d1570d04c031ab4d55fff6aa13a61f0e4045f2338947f9fb0ee3a80fdee57cfe86db885390595460342181e1ec52b89f127ef09c393ae3db7f
Oct 2 19:11:05.037192 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm"
Oct 2 19:11:05.037192 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet"
Oct 2 19:11:05.037192 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/arm64/kubelet: attempt #1
Oct 2 19:11:05.072480 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Oct 2 19:11:05.948158 ignition[858]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 7b872a34d86e8aa75455a62a20f5cf16426de2ae54ffb8e0250fead920838df818201b8512c2f8bf4c939e5b21babab371f3a48803e2e861da9e6f8cdd022324
Oct 2 19:11:05.950305 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet"
Oct 2 19:11:05.950305 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Oct 2 19:11:05.950305 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Oct 2 19:11:05.950305 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json"
Oct 2 19:11:05.950305 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json"
Oct 2 19:11:05.950305 ignition[858]: INFO : files: op(9): [started] processing unit "prepare-cni-plugins.service"
Oct 2 19:11:05.950305 ignition[858]: INFO : files: op(9): op(a): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Oct 2 19:11:05.950305 ignition[858]: INFO : files: op(9): op(a): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Oct 2 19:11:05.950305 ignition[858]: INFO : files: op(9): [finished] processing unit "prepare-cni-plugins.service"
Oct 2 19:11:05.950305 ignition[858]: INFO : files: op(b): [started] processing unit "prepare-critools.service"
Oct 2 19:11:05.950305 ignition[858]: INFO : files: op(b): op(c): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Oct 2 19:11:05.950305 ignition[858]: INFO : files: op(b): op(c): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Oct 2 19:11:05.950305 ignition[858]: INFO : files: op(b): [finished] processing unit "prepare-critools.service"
Oct 2 19:11:05.950305 ignition[858]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 2 19:11:05.950305 ignition[858]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 2 19:11:05.950305 ignition[858]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 2 19:11:05.950305 ignition[858]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 2 19:11:05.950305 ignition[858]: INFO : files: op(f): [started] setting preset to enabled for "prepare-cni-plugins.service"
Oct 2 19:11:05.972859 ignition[858]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Oct 2 19:11:05.972859 ignition[858]: INFO : files: op(10): [started] setting preset to enabled for "prepare-critools.service"
Oct 2 19:11:05.972859 ignition[858]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-critools.service"
Oct 2 19:11:05.972859 ignition[858]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service"
Oct 2 19:11:05.972859 ignition[858]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 2 19:11:06.002404 ignition[858]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 2 19:11:06.003478 ignition[858]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 2 19:11:06.003478 ignition[858]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 2 19:11:06.003478 ignition[858]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 2 19:11:06.003478 ignition[858]: INFO : files: files passed
Oct 2 19:11:06.003478 ignition[858]: INFO : Ignition finished successfully
Oct 2 19:11:06.008886 systemd[1]: Finished ignition-files.service.
Oct 2 19:11:06.012357 kernel: kauditd_printk_skb: 23 callbacks suppressed
Oct 2 19:11:06.012382 kernel: audit: type=1130 audit(1696273866.009:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.025023 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Oct 2 19:11:06.025671 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Oct 2 19:11:06.026400 systemd[1]: Starting ignition-quench.service...
Oct 2 19:11:06.029737 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 2 19:11:06.029823 systemd[1]: Finished ignition-quench.service.
Oct 2 19:11:06.034586 kernel: audit: type=1130 audit(1696273866.030:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.034610 kernel: audit: type=1131 audit(1696273866.030:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.037520 initrd-setup-root-after-ignition[883]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Oct 2 19:11:06.040748 initrd-setup-root-after-ignition[885]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 2 19:11:06.041429 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Oct 2 19:11:06.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.042462 systemd[1]: Reached target ignition-complete.target.
Oct 2 19:11:06.045698 kernel: audit: type=1130 audit(1696273866.041:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.045836 systemd[1]: Starting initrd-parse-etc.service...
Oct 2 19:11:06.060095 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 2 19:11:06.060189 systemd[1]: Finished initrd-parse-etc.service.
Oct 2 19:11:06.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.061383 systemd[1]: Reached target initrd-fs.target.
Oct 2 19:11:06.065872 kernel: audit: type=1130 audit(1696273866.060:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.065895 kernel: audit: type=1131 audit(1696273866.060:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.065445 systemd[1]: Reached target initrd.target.
Oct 2 19:11:06.066361 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Oct 2 19:11:06.067085 systemd[1]: Starting dracut-pre-pivot.service...
Oct 2 19:11:06.085693 systemd[1]: Finished dracut-pre-pivot.service.
Oct 2 19:11:06.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.087149 systemd[1]: Starting initrd-cleanup.service...
Oct 2 19:11:06.089320 kernel: audit: type=1130 audit(1696273866.085:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.095669 systemd[1]: Stopped target network.target.
Oct 2 19:11:06.096291 systemd[1]: Stopped target nss-lookup.target.
Oct 2 19:11:06.097271 systemd[1]: Stopped target remote-cryptsetup.target.
Oct 2 19:11:06.098286 systemd[1]: Stopped target timers.target.
Oct 2 19:11:06.099184 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 2 19:11:06.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.099290 systemd[1]: Stopped dracut-pre-pivot.service.
Oct 2 19:11:06.103211 kernel: audit: type=1131 audit(1696273866.099:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.100196 systemd[1]: Stopped target initrd.target.
Oct 2 19:11:06.102911 systemd[1]: Stopped target basic.target.
Oct 2 19:11:06.103956 systemd[1]: Stopped target ignition-complete.target.
Oct 2 19:11:06.105102 systemd[1]: Stopped target ignition-diskful.target.
Oct 2 19:11:06.106243 systemd[1]: Stopped target initrd-root-device.target.
Oct 2 19:11:06.107495 systemd[1]: Stopped target remote-fs.target.
Oct 2 19:11:06.108615 systemd[1]: Stopped target remote-fs-pre.target.
Oct 2 19:11:06.109931 systemd[1]: Stopped target sysinit.target.
Oct 2 19:11:06.110991 systemd[1]: Stopped target local-fs.target.
Oct 2 19:11:06.112104 systemd[1]: Stopped target local-fs-pre.target.
Oct 2 19:11:06.113212 systemd[1]: Stopped target swap.target.
Oct 2 19:11:06.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.114269 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 2 19:11:06.118915 kernel: audit: type=1131 audit(1696273866.114:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.114380 systemd[1]: Stopped dracut-pre-mount.service.
Oct 2 19:11:06.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.115614 systemd[1]: Stopped target cryptsetup.target.
Oct 2 19:11:06.122988 kernel: audit: type=1131 audit(1696273866.119:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.118406 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 2 19:11:06.118508 systemd[1]: Stopped dracut-initqueue.service.
Oct 2 19:11:06.119788 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 2 19:11:06.119888 systemd[1]: Stopped ignition-fetch-offline.service.
Oct 2 19:11:06.122715 systemd[1]: Stopped target paths.target.
Oct 2 19:11:06.123705 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 2 19:11:06.127674 systemd[1]: Stopped systemd-ask-password-console.path.
Oct 2 19:11:06.129142 systemd[1]: Stopped target slices.target.
Oct 2 19:11:06.129979 systemd[1]: Stopped target sockets.target.
Oct 2 19:11:06.131102 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 2 19:11:06.131181 systemd[1]: Closed iscsid.socket.
Oct 2 19:11:06.132147 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 2 19:11:06.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.132209 systemd[1]: Closed iscsiuio.socket.
Oct 2 19:11:06.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.133312 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 2 19:11:06.133412 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Oct 2 19:11:06.134606 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 2 19:11:06.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.134718 systemd[1]: Stopped ignition-files.service.
Oct 2 19:11:06.136653 systemd[1]: Stopping ignition-mount.service...
Oct 2 19:11:06.137286 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 2 19:11:06.137409 systemd[1]: Stopped kmod-static-nodes.service.
Oct 2 19:11:06.139126 systemd[1]: Stopping sysroot-boot.service...
Oct 2 19:11:06.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.139996 systemd[1]: Stopping systemd-networkd.service...
Oct 2 19:11:06.140972 systemd[1]: Stopping systemd-resolved.service...
Oct 2 19:11:06.141809 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 2 19:11:06.141936 systemd[1]: Stopped systemd-udev-trigger.service.
Oct 2 19:11:06.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.142882 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 2 19:11:06.146559 ignition[898]: INFO : Ignition 2.14.0
Oct 2 19:11:06.146559 ignition[898]: INFO : Stage: umount
Oct 2 19:11:06.146559 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 2 19:11:06.146559 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 2 19:11:06.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.142976 systemd[1]: Stopped dracut-pre-trigger.service.
Oct 2 19:11:06.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.151864 ignition[898]: INFO : umount: umount passed
Oct 2 19:11:06.151864 ignition[898]: INFO : Ignition finished successfully
Oct 2 19:11:06.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.146685 systemd-networkd[743]: eth0: DHCPv6 lease lost
Oct 2 19:11:06.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.154000 audit: BPF prog-id=9 op=UNLOAD
Oct 2 19:11:06.147226 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 2 19:11:06.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.147305 systemd[1]: Finished initrd-cleanup.service.
Oct 2 19:11:06.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.148479 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 2 19:11:06.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.148559 systemd[1]: Stopped systemd-networkd.service.
Oct 2 19:11:06.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.149626 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 2 19:11:06.149718 systemd[1]: Stopped ignition-mount.service.
Oct 2 19:11:06.151070 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 2 19:11:06.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.151104 systemd[1]: Closed systemd-networkd.socket.
Oct 2 19:11:06.152232 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 2 19:11:06.152269 systemd[1]: Stopped ignition-disks.service.
Oct 2 19:11:06.153200 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 2 19:11:06.153235 systemd[1]: Stopped ignition-kargs.service.
Oct 2 19:11:06.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.154281 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 2 19:11:06.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.154331 systemd[1]: Stopped ignition-setup.service.
Oct 2 19:11:06.156023 systemd[1]: Stopping network-cleanup.service...
Oct 2 19:11:06.157202 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 2 19:11:06.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.157255 systemd[1]: Stopped parse-ip-for-networkd.service.
Oct 2 19:11:06.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.182000 audit: BPF prog-id=6 op=UNLOAD
Oct 2 19:11:06.157975 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 2 19:11:06.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.158013 systemd[1]: Stopped systemd-sysctl.service.
Oct 2 19:11:06.161619 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 2 19:11:06.161675 systemd[1]: Stopped systemd-modules-load.service.
Oct 2 19:11:06.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.163730 systemd[1]: Stopping systemd-udevd.service...
Oct 2 19:11:06.168667 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 2 19:11:06.168746 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 2 19:11:06.169263 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 2 19:11:06.169354 systemd[1]: Stopped systemd-resolved.service.
Oct 2 19:11:06.173856 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 2 19:11:06.173987 systemd[1]: Stopped systemd-udevd.service.
Oct 2 19:11:06.175988 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 2 19:11:06.176069 systemd[1]: Stopped network-cleanup.service.
Oct 2 19:11:06.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.176804 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 2 19:11:06.176839 systemd[1]: Closed systemd-udevd-control.socket.
Oct 2 19:11:06.177869 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 2 19:11:06.177904 systemd[1]: Closed systemd-udevd-kernel.socket.
Oct 2 19:11:06.178890 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 2 19:11:06.178937 systemd[1]: Stopped dracut-pre-udev.service.
Oct 2 19:11:06.181302 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 2 19:11:06.181342 systemd[1]: Stopped dracut-cmdline.service.
Oct 2 19:11:06.182340 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 2 19:11:06.182374 systemd[1]: Stopped dracut-cmdline-ask.service.
Oct 2 19:11:06.184119 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Oct 2 19:11:06.185094 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 2 19:11:06.185148 systemd[1]: Stopped systemd-vconsole-setup.service.
Oct 2 19:11:06.191582 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 2 19:11:06.191679 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Oct 2 19:11:06.231699 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 2 19:11:06.231795 systemd[1]: Stopped sysroot-boot.service.
Oct 2 19:11:06.232920 systemd[1]: Reached target initrd-switch-root.target.
Oct 2 19:11:06.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.233727 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 2 19:11:06.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.233771 systemd[1]: Stopped initrd-setup-root.service.
Oct 2 19:11:06.235383 systemd[1]: Starting initrd-switch-root.service...
Oct 2 19:11:06.242621 systemd[1]: Switching root.
Oct 2 19:11:06.253909 iscsid[749]: iscsid shutting down.
Oct 2 19:11:06.254400 systemd-journald[290]: Journal stopped
Oct 2 19:11:08.319754 systemd-journald[290]: Received SIGTERM from PID 1 (systemd).
Oct 2 19:11:08.321640 kernel: SELinux: Class mctp_socket not defined in policy.
Oct 2 19:11:08.321656 kernel: SELinux: Class anon_inode not defined in policy.
Oct 2 19:11:08.321668 kernel: SELinux: the above unknown classes and permissions will be allowed
Oct 2 19:11:08.321678 kernel: SELinux: policy capability network_peer_controls=1
Oct 2 19:11:08.321688 kernel: SELinux: policy capability open_perms=1
Oct 2 19:11:08.321699 kernel: SELinux: policy capability extended_socket_class=1
Oct 2 19:11:08.321717 kernel: SELinux: policy capability always_check_network=0
Oct 2 19:11:08.321727 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 2 19:11:08.321736 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 2 19:11:08.321746 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 2 19:11:08.321755 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 2 19:11:08.321765 systemd[1]: Successfully loaded SELinux policy in 30.898ms.
Oct 2 19:11:08.321793 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.822ms.
Oct 2 19:11:08.321804 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 2 19:11:08.321816 systemd[1]: Detected virtualization kvm.
Oct 2 19:11:08.321826 systemd[1]: Detected architecture arm64.
Oct 2 19:11:08.321836 systemd[1]: Detected first boot.
Oct 2 19:11:08.321846 systemd[1]: Initializing machine ID from VM UUID.
Oct 2 19:11:08.321856 systemd[1]: Populated /etc with preset unit settings.
Oct 2 19:11:08.321874 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Oct 2 19:11:08.321886 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 2 19:11:08.321903 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 2 19:11:08.321916 systemd[1]: iscsiuio.service: Deactivated successfully.
Oct 2 19:11:08.321928 systemd[1]: Stopped iscsiuio.service.
Oct 2 19:11:08.321938 systemd[1]: iscsid.service: Deactivated successfully.
Oct 2 19:11:08.321950 systemd[1]: Stopped iscsid.service.
Oct 2 19:11:08.321960 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 2 19:11:08.321970 systemd[1]: Stopped initrd-switch-root.service.
Oct 2 19:11:08.321980 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 2 19:11:08.321991 systemd[1]: Created slice system-addon\x2dconfig.slice.
Oct 2 19:11:08.322001 systemd[1]: Created slice system-addon\x2drun.slice.
Oct 2 19:11:08.322011 systemd[1]: Created slice system-getty.slice.
Oct 2 19:11:08.322033 systemd[1]: Created slice system-modprobe.slice.
Oct 2 19:11:08.322044 systemd[1]: Created slice system-serial\x2dgetty.slice.
Oct 2 19:11:08.322056 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Oct 2 19:11:08.322066 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Oct 2 19:11:08.322077 systemd[1]: Created slice user.slice.
Oct 2 19:11:08.322086 systemd[1]: Started systemd-ask-password-console.path.
Oct 2 19:11:08.322097 systemd[1]: Started systemd-ask-password-wall.path.
Oct 2 19:11:08.322108 systemd[1]: Set up automount boot.automount.
Oct 2 19:11:08.322119 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Oct 2 19:11:08.322130 systemd[1]: Stopped target initrd-switch-root.target.
Oct 2 19:11:08.322141 systemd[1]: Stopped target initrd-fs.target.
Oct 2 19:11:08.322151 systemd[1]: Stopped target initrd-root-fs.target.
Oct 2 19:11:08.322162 systemd[1]: Reached target integritysetup.target.
Oct 2 19:11:08.322172 systemd[1]: Reached target remote-cryptsetup.target.
Oct 2 19:11:08.322182 systemd[1]: Reached target remote-fs.target.
Oct 2 19:11:08.322192 systemd[1]: Reached target slices.target.
Oct 2 19:11:08.322202 systemd[1]: Reached target swap.target.
Oct 2 19:11:08.322213 systemd[1]: Reached target torcx.target.
Oct 2 19:11:08.322223 systemd[1]: Reached target veritysetup.target.
Oct 2 19:11:08.322234 systemd[1]: Listening on systemd-coredump.socket.
Oct 2 19:11:08.322245 systemd[1]: Listening on systemd-initctl.socket.
Oct 2 19:11:08.322255 systemd[1]: Listening on systemd-networkd.socket.
Oct 2 19:11:08.322265 systemd[1]: Listening on systemd-udevd-control.socket.
Oct 2 19:11:08.322275 systemd[1]: Listening on systemd-udevd-kernel.socket.
Oct 2 19:11:08.322286 systemd[1]: Listening on systemd-userdbd.socket.
Oct 2 19:11:08.322296 systemd[1]: Mounting dev-hugepages.mount...
Oct 2 19:11:08.322306 systemd[1]: Mounting dev-mqueue.mount...
Oct 2 19:11:08.322317 systemd[1]: Mounting media.mount...
Oct 2 19:11:08.322328 systemd[1]: Mounting sys-kernel-debug.mount...
Oct 2 19:11:08.322339 systemd[1]: Mounting sys-kernel-tracing.mount...
Oct 2 19:11:08.322349 systemd[1]: Mounting tmp.mount...
Oct 2 19:11:08.322359 systemd[1]: Starting flatcar-tmpfiles.service...
Oct 2 19:11:08.322369 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Oct 2 19:11:08.322381 systemd[1]: Starting kmod-static-nodes.service...
Oct 2 19:11:08.322391 systemd[1]: Starting modprobe@configfs.service...
Oct 2 19:11:08.322401 systemd[1]: Starting modprobe@dm_mod.service...
Oct 2 19:11:08.322411 systemd[1]: Starting modprobe@drm.service...
Oct 2 19:11:08.322422 systemd[1]: Starting modprobe@efi_pstore.service...
Oct 2 19:11:08.322432 systemd[1]: Starting modprobe@fuse.service...
Oct 2 19:11:08.322442 systemd[1]: Starting modprobe@loop.service...
Oct 2 19:11:08.322454 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 2 19:11:08.322465 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 2 19:11:08.322475 systemd[1]: Stopped systemd-fsck-root.service.
Oct 2 19:11:08.322485 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 2 19:11:08.322496 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 2 19:11:08.322505 systemd[1]: Stopped systemd-journald.service.
Oct 2 19:11:08.322517 kernel: loop: module loaded
Oct 2 19:11:08.322526 systemd[1]: Starting systemd-journald.service...
Oct 2 19:11:08.322537 systemd[1]: Starting systemd-modules-load.service...
Oct 2 19:11:08.322547 kernel: fuse: init (API version 7.34)
Oct 2 19:11:08.322557 systemd[1]: Starting systemd-network-generator.service...
Oct 2 19:11:08.322567 systemd[1]: Starting systemd-remount-fs.service...
Oct 2 19:11:08.322577 systemd[1]: Starting systemd-udev-trigger.service...
Oct 2 19:11:08.322587 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 2 19:11:08.322598 systemd[1]: Stopped verity-setup.service.
Oct 2 19:11:08.322608 systemd[1]: Mounted dev-hugepages.mount.
Oct 2 19:11:08.322620 systemd[1]: Mounted dev-mqueue.mount.
Oct 2 19:11:08.322639 systemd[1]: Mounted media.mount.
Oct 2 19:11:08.322650 systemd[1]: Mounted sys-kernel-debug.mount.
Oct 2 19:11:08.322660 systemd[1]: Mounted sys-kernel-tracing.mount.
Oct 2 19:11:08.322670 systemd[1]: Mounted tmp.mount.
Oct 2 19:11:08.322680 systemd[1]: Finished kmod-static-nodes.service.
Oct 2 19:11:08.322690 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 2 19:11:08.322701 systemd[1]: Finished modprobe@configfs.service.
Oct 2 19:11:08.322712 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 2 19:11:08.322723 systemd[1]: Finished modprobe@dm_mod.service.
Oct 2 19:11:08.322733 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 2 19:11:08.322743 systemd[1]: Finished modprobe@drm.service.
Oct 2 19:11:08.322753 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 2 19:11:08.322763 systemd[1]: Finished modprobe@efi_pstore.service.
Oct 2 19:11:08.322775 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 2 19:11:08.322785 systemd[1]: Finished modprobe@fuse.service.
Oct 2 19:11:08.322795 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 2 19:11:08.322805 systemd[1]: Finished modprobe@loop.service.
Oct 2 19:11:08.322818 systemd-journald[992]: Journal started
Oct 2 19:11:08.322869 systemd-journald[992]: Runtime Journal (/run/log/journal/73421ab676d442c7b95504e486cbdb6d) is 6.0M, max 48.7M, 42.6M free.
Oct 2 19:11:06.318000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 2 19:11:06.462000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Oct 2 19:11:06.462000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Oct 2 19:11:06.462000 audit: BPF prog-id=10 op=LOAD
Oct 2 19:11:06.462000 audit: BPF prog-id=10 op=UNLOAD
Oct 2 19:11:06.462000 audit: BPF prog-id=11 op=LOAD
Oct 2 19:11:06.462000 audit: BPF prog-id=11 op=UNLOAD
Oct 2 19:11:08.201000 audit: BPF prog-id=12 op=LOAD
Oct 2 19:11:08.201000 audit: BPF prog-id=3 op=UNLOAD
Oct 2 19:11:08.201000 audit: BPF prog-id=13 op=LOAD
Oct 2 19:11:08.201000 audit: BPF prog-id=14 op=LOAD
Oct 2 19:11:08.201000 audit: BPF prog-id=4 op=UNLOAD
Oct 2 19:11:08.201000 audit: BPF prog-id=5 op=UNLOAD
Oct 2 19:11:08.201000 audit: BPF prog-id=15 op=LOAD
Oct 2 19:11:08.201000 audit: BPF prog-id=12 op=UNLOAD
Oct 2 19:11:08.201000 audit: BPF prog-id=16 op=LOAD
Oct 2 19:11:08.201000 audit: BPF prog-id=17 op=LOAD
Oct 2 19:11:08.201000 audit: BPF prog-id=13 op=UNLOAD
Oct 2 19:11:08.201000 audit: BPF prog-id=14 op=UNLOAD
Oct 2 19:11:08.202000 audit: BPF prog-id=18 op=LOAD
Oct 2 19:11:08.202000 audit: BPF prog-id=15 op=UNLOAD
Oct 2 19:11:08.202000 audit: BPF prog-id=19 op=LOAD
Oct 2 19:11:08.202000 audit: BPF prog-id=20 op=LOAD
Oct 2 19:11:08.202000 audit: BPF prog-id=16 op=UNLOAD
Oct 2 19:11:08.202000 audit: BPF prog-id=17 op=UNLOAD
Oct 2 19:11:08.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.213000 audit: BPF prog-id=18 op=UNLOAD
Oct 2 19:11:08.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.283000 audit: BPF prog-id=21 op=LOAD
Oct 2 19:11:08.285000 audit: BPF prog-id=22 op=LOAD
Oct 2 19:11:08.286000 audit: BPF prog-id=23 op=LOAD
Oct 2 19:11:08.286000 audit: BPF prog-id=19 op=UNLOAD
Oct 2 19:11:08.286000 audit: BPF prog-id=20 op=UNLOAD
Oct 2 19:11:08.324074 systemd[1]: Started systemd-journald.service.
Oct 2 19:11:08.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.318000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Oct 2 19:11:08.318000 audit[992]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffebfef7a0 a2=4000 a3=1 items=0 ppid=1 pid=992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 2 19:11:08.318000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Oct 2 19:11:08.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.510706 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2023-10-02T19:11:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]"
Oct 2 19:11:08.199865 systemd[1]: Queued start job for default target multi-user.target.
Oct 2 19:11:08.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:06.511579 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2023-10-02T19:11:06Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Oct 2 19:11:08.199875 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Oct 2 19:11:06.511599 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2023-10-02T19:11:06Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Oct 2 19:11:08.203726 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 2 19:11:06.511642 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2023-10-02T19:11:06Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Oct 2 19:11:08.324894 systemd[1]: Finished systemd-modules-load.service.
Oct 2 19:11:06.511652 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2023-10-02T19:11:06Z" level=debug msg="skipped missing lower profile" missing profile=oem
Oct 2 19:11:08.325802 systemd[1]: Finished systemd-network-generator.service.
Oct 2 19:11:06.511684 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2023-10-02T19:11:06Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Oct 2 19:11:08.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.326719 systemd[1]: Finished systemd-remount-fs.service.
Oct 2 19:11:06.511697 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2023-10-02T19:11:06Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Oct 2 19:11:06.511892 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2023-10-02T19:11:06Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Oct 2 19:11:06.511939 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2023-10-02T19:11:06Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Oct 2 19:11:06.511951 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2023-10-02T19:11:06Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Oct 2 19:11:06.512429 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2023-10-02T19:11:06Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Oct 2 19:11:06.512466 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2023-10-02T19:11:06Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Oct 2 19:11:06.512483 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2023-10-02T19:11:06Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0
Oct 2 19:11:08.327834 systemd[1]: Reached target network-pre.target.
Oct 2 19:11:06.512497 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2023-10-02T19:11:06Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Oct 2 19:11:06.512514 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2023-10-02T19:11:06Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0
Oct 2 19:11:06.512527 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2023-10-02T19:11:06Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Oct 2 19:11:07.962666 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2023-10-02T19:11:07Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Oct 2 19:11:07.962944 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2023-10-02T19:11:07Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Oct 2 19:11:07.963046 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2023-10-02T19:11:07Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Oct 2 19:11:07.963204 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2023-10-02T19:11:07Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Oct 2 19:11:07.963254 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2023-10-02T19:11:07Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Oct 2 19:11:07.963311 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2023-10-02T19:11:07Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Oct 2 19:11:08.329555 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Oct 2 19:11:08.331169 systemd[1]: Mounting sys-kernel-config.mount...
Oct 2 19:11:08.331839 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 2 19:11:08.337267 systemd[1]: Starting systemd-hwdb-update.service...
Oct 2 19:11:08.339091 systemd[1]: Starting systemd-journal-flush.service...
Oct 2 19:11:08.339859 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 2 19:11:08.340960 systemd[1]: Starting systemd-random-seed.service...
Oct 2 19:11:08.341831 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Oct 2 19:11:08.342940 systemd[1]: Starting systemd-sysctl.service...
Oct 2 19:11:08.347343 systemd-journald[992]: Time spent on flushing to /var/log/journal/73421ab676d442c7b95504e486cbdb6d is 21.503ms for 985 entries.
Oct 2 19:11:08.347343 systemd-journald[992]: System Journal (/var/log/journal/73421ab676d442c7b95504e486cbdb6d) is 8.0M, max 195.6M, 187.6M free.
Oct 2 19:11:08.385734 systemd-journald[992]: Received client request to flush runtime journal.
Oct 2 19:11:08.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.346662 systemd[1]: Finished flatcar-tmpfiles.service.
Oct 2 19:11:08.348494 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Oct 2 19:11:08.349302 systemd[1]: Mounted sys-kernel-config.mount.
Oct 2 19:11:08.386415 udevadm[1031]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Oct 2 19:11:08.353375 systemd[1]: Starting systemd-sysusers.service...
Oct 2 19:11:08.355575 systemd[1]: Finished systemd-random-seed.service.
Oct 2 19:11:08.356422 systemd[1]: Reached target first-boot-complete.target.
Oct 2 19:11:08.359944 systemd[1]: Finished systemd-sysctl.service.
Oct 2 19:11:08.363467 systemd[1]: Finished systemd-udev-trigger.service.
Oct 2 19:11:08.365388 systemd[1]: Starting systemd-udev-settle.service...
Oct 2 19:11:08.370338 systemd[1]: Finished systemd-sysusers.service.
Oct 2 19:11:08.386710 systemd[1]: Finished systemd-journal-flush.service.
Oct 2 19:11:08.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.716148 systemd[1]: Finished systemd-hwdb-update.service.
Oct 2 19:11:08.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.716000 audit: BPF prog-id=24 op=LOAD
Oct 2 19:11:08.716000 audit: BPF prog-id=25 op=LOAD
Oct 2 19:11:08.716000 audit: BPF prog-id=7 op=UNLOAD
Oct 2 19:11:08.716000 audit: BPF prog-id=8 op=UNLOAD
Oct 2 19:11:08.718122 systemd[1]: Starting systemd-udevd.service...
Oct 2 19:11:08.737712 systemd-udevd[1033]: Using default interface naming scheme 'v252'.
Oct 2 19:11:08.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.749000 audit: BPF prog-id=26 op=LOAD
Oct 2 19:11:08.757000 audit: BPF prog-id=27 op=LOAD
Oct 2 19:11:08.757000 audit: BPF prog-id=28 op=LOAD
Oct 2 19:11:08.757000 audit: BPF prog-id=29 op=LOAD
Oct 2 19:11:08.748837 systemd[1]: Started systemd-udevd.service.
Oct 2 19:11:08.750882 systemd[1]: Starting systemd-networkd.service...
Oct 2 19:11:08.758683 systemd[1]: Starting systemd-userdbd.service...
Oct 2 19:11:08.786698 systemd[1]: Started systemd-userdbd.service.
Oct 2 19:11:08.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.793532 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped.
Oct 2 19:11:08.804680 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Oct 2 19:11:08.842899 systemd-networkd[1041]: lo: Link UP
Oct 2 19:11:08.842908 systemd-networkd[1041]: lo: Gained carrier
Oct 2 19:11:08.843239 systemd-networkd[1041]: Enumeration completed
Oct 2 19:11:08.843329 systemd[1]: Started systemd-networkd.service.
Oct 2 19:11:08.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.844432 systemd-networkd[1041]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 2 19:11:08.847404 systemd-networkd[1041]: eth0: Link UP
Oct 2 19:11:08.847412 systemd-networkd[1041]: eth0: Gained carrier
Oct 2 19:11:08.859009 systemd[1]: Finished systemd-udev-settle.service.
Oct 2 19:11:08.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.860877 systemd[1]: Starting lvm2-activation-early.service...
Oct 2 19:11:08.873003 lvm[1066]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 2 19:11:08.873794 systemd-networkd[1041]: eth0: DHCPv4 address 10.0.0.113/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 2 19:11:08.895478 systemd[1]: Finished lvm2-activation-early.service.
Oct 2 19:11:08.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.896293 systemd[1]: Reached target cryptsetup.target.
Oct 2 19:11:08.898077 systemd[1]: Starting lvm2-activation.service...
Oct 2 19:11:08.901690 lvm[1067]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 2 19:11:08.935415 systemd[1]: Finished lvm2-activation.service.
Oct 2 19:11:08.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:08.936181 systemd[1]: Reached target local-fs-pre.target.
Oct 2 19:11:08.936786 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 2 19:11:08.936815 systemd[1]: Reached target local-fs.target.
Oct 2 19:11:08.937349 systemd[1]: Reached target machines.target.
Oct 2 19:11:08.939095 systemd[1]: Starting ldconfig.service...
Oct 2 19:11:08.939972 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Oct 2 19:11:08.940031 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 2 19:11:08.941085 systemd[1]: Starting systemd-boot-update.service...
Oct 2 19:11:08.942788 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Oct 2 19:11:08.944598 systemd[1]: Starting systemd-machine-id-commit.service...
Oct 2 19:11:08.946285 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Oct 2 19:11:08.946337 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Oct 2 19:11:08.947320 systemd[1]: Starting systemd-tmpfiles-setup.service...
Oct 2 19:11:08.948209 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1069 (bootctl)
Oct 2 19:11:08.949324 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Oct 2 19:11:08.961821 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Oct 2 19:11:08.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:09.031427 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 2 19:11:09.032023 systemd[1]: Finished systemd-machine-id-commit.service.
Oct 2 19:11:09.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:09.033538 systemd-tmpfiles[1072]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Oct 2 19:11:09.036316 systemd-tmpfiles[1072]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 2 19:11:09.039074 systemd-tmpfiles[1072]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 2 19:11:09.053773 systemd-fsck[1079]: fsck.fat 4.2 (2021-01-31)
Oct 2 19:11:09.053773 systemd-fsck[1079]: /dev/vda1: 236 files, 113463/258078 clusters
Oct 2 19:11:09.055483 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Oct 2 19:11:09.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:09.058001 systemd[1]: Mounting boot.mount...
Oct 2 19:11:09.070081 systemd[1]: Mounted boot.mount.
Oct 2 19:11:09.078206 systemd[1]: Finished systemd-boot-update.service.
Oct 2 19:11:09.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:09.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:09.131424 systemd[1]: Finished systemd-tmpfiles-setup.service.
Oct 2 19:11:09.133433 systemd[1]: Starting audit-rules.service...
Oct 2 19:11:09.135056 systemd[1]: Starting clean-ca-certificates.service...
Oct 2 19:11:09.136758 systemd[1]: Starting systemd-journal-catalog-update.service...
Oct 2 19:11:09.137000 audit: BPF prog-id=30 op=LOAD
Oct 2 19:11:09.140000 audit: BPF prog-id=31 op=LOAD
Oct 2 19:11:09.139459 systemd[1]: Starting systemd-resolved.service...
Oct 2 19:11:09.143623 systemd[1]: Starting systemd-timesyncd.service...
Oct 2 19:11:09.145220 systemd[1]: Starting systemd-update-utmp.service...
Oct 2 19:11:09.146370 systemd[1]: Finished clean-ca-certificates.service.
Oct 2 19:11:09.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:09.147604 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 2 19:11:09.152541 ldconfig[1068]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 2 19:11:09.153000 audit[1094]: SYSTEM_BOOT pid=1094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:09.156594 systemd[1]: Finished systemd-update-utmp.service.
Oct 2 19:11:09.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:09.157694 systemd[1]: Finished ldconfig.service.
Oct 2 19:11:09.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:09.166367 systemd[1]: Finished systemd-journal-catalog-update.service.
Oct 2 19:11:09.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:09.168237 systemd[1]: Starting systemd-update-done.service...
Oct 2 19:11:09.176045 systemd[1]: Finished systemd-update-done.service.
Oct 2 19:11:09.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:09.183000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Oct 2 19:11:09.183000 audit[1104]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe41036c0 a2=420 a3=0 items=0 ppid=1083 pid=1104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 2 19:11:09.183554 augenrules[1104]: No rules
Oct 2 19:11:09.183000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Oct 2 19:11:09.184428 systemd[1]: Finished audit-rules.service.
Oct 2 19:11:09.200163 systemd[1]: Started systemd-timesyncd.service.
Oct 2 19:11:09.200209 systemd-resolved[1087]: Positive Trust Anchors:
Oct 2 19:11:09.200215 systemd-resolved[1087]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 2 19:11:09.200242 systemd-resolved[1087]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Oct 2 19:11:09.201147 systemd-timesyncd[1091]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 2 19:11:09.201208 systemd-timesyncd[1091]: Initial clock synchronization to Mon 2023-10-02 19:11:09.153884 UTC.
Oct 2 19:11:09.201249 systemd[1]: Reached target time-set.target.
Oct 2 19:11:09.211973 systemd-resolved[1087]: Defaulting to hostname 'linux'.
Oct 2 19:11:09.213308 systemd[1]: Started systemd-resolved.service.
Oct 2 19:11:09.213999 systemd[1]: Reached target network.target.
Oct 2 19:11:09.214553 systemd[1]: Reached target nss-lookup.target.
Oct 2 19:11:09.215184 systemd[1]: Reached target sysinit.target.
Oct 2 19:11:09.215807 systemd[1]: Started motdgen.path.
Oct 2 19:11:09.216347 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Oct 2 19:11:09.217297 systemd[1]: Started logrotate.timer.
Oct 2 19:11:09.217974 systemd[1]: Started mdadm.timer.
Oct 2 19:11:09.218487 systemd[1]: Started systemd-tmpfiles-clean.timer.
Oct 2 19:11:09.219138 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 2 19:11:09.219171 systemd[1]: Reached target paths.target.
Oct 2 19:11:09.219719 systemd[1]: Reached target timers.target.
Oct 2 19:11:09.220578 systemd[1]: Listening on dbus.socket.
Oct 2 19:11:09.222091 systemd[1]: Starting docker.socket...
Oct 2 19:11:09.225187 systemd[1]: Listening on sshd.socket.
Oct 2 19:11:09.225874 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 2 19:11:09.226289 systemd[1]: Listening on docker.socket.
Oct 2 19:11:09.226956 systemd[1]: Reached target sockets.target.
Oct 2 19:11:09.227529 systemd[1]: Reached target basic.target.
Oct 2 19:11:09.228141 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Oct 2 19:11:09.228167 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Oct 2 19:11:09.229102 systemd[1]: Starting containerd.service...
Oct 2 19:11:09.230656 systemd[1]: Starting dbus.service...
Oct 2 19:11:09.232061 systemd[1]: Starting enable-oem-cloudinit.service...
Oct 2 19:11:09.233713 systemd[1]: Starting extend-filesystems.service...
Oct 2 19:11:09.234369 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Oct 2 19:11:09.235587 systemd[1]: Starting motdgen.service...
Oct 2 19:11:09.236822 jq[1114]: false
Oct 2 19:11:09.238869 systemd[1]: Starting prepare-cni-plugins.service...
Oct 2 19:11:09.240910 systemd[1]: Starting prepare-critools.service...
Oct 2 19:11:09.242822 systemd[1]: Starting ssh-key-proc-cmdline.service...
Oct 2 19:11:09.244658 systemd[1]: Starting sshd-keygen.service...
Oct 2 19:11:09.247430 systemd[1]: Starting systemd-logind.service...
Oct 2 19:11:09.248106 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 2 19:11:09.248165 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 2 19:11:09.249251 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 2 19:11:09.249956 systemd[1]: Starting update-engine.service...
Oct 2 19:11:09.252022 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Oct 2 19:11:09.254350 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 2 19:11:09.257497 jq[1131]: true
Oct 2 19:11:09.254515 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Oct 2 19:11:09.256232 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 2 19:11:09.256383 systemd[1]: Finished ssh-key-proc-cmdline.service.
Oct 2 19:11:09.262424 extend-filesystems[1115]: Found vda
Oct 2 19:11:09.262424 extend-filesystems[1115]: Found vda1
Oct 2 19:11:09.262424 extend-filesystems[1115]: Found vda2
Oct 2 19:11:09.267834 extend-filesystems[1115]: Found vda3
Oct 2 19:11:09.267834 extend-filesystems[1115]: Found usr
Oct 2 19:11:09.267834 extend-filesystems[1115]: Found vda4
Oct 2 19:11:09.267834 extend-filesystems[1115]: Found vda6
Oct 2 19:11:09.267834 extend-filesystems[1115]: Found vda7
Oct 2 19:11:09.267834 extend-filesystems[1115]: Found vda9
Oct 2 19:11:09.271350 jq[1137]: true
Oct 2 19:11:09.274759 tar[1136]: crictl
Oct 2 19:11:09.274960 extend-filesystems[1115]: Checking size of /dev/vda9
Oct 2 19:11:09.275016 systemd[1]: motdgen.service: Deactivated successfully.
Oct 2 19:11:09.275176 systemd[1]: Finished motdgen.service.
Oct 2 19:11:09.278982 tar[1135]: ./
Oct 2 19:11:09.278982 tar[1135]: ./macvlan
Oct 2 19:11:09.284020 extend-filesystems[1115]: Old size kept for /dev/vda9
Oct 2 19:11:09.284428 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 2 19:11:09.284577 systemd[1]: Finished extend-filesystems.service.
Oct 2 19:11:09.308952 systemd-logind[1127]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 2 19:11:09.309127 systemd-logind[1127]: New seat seat0.
Oct 2 19:11:09.312365 dbus-daemon[1113]: [system] SELinux support is enabled
Oct 2 19:11:09.312532 systemd[1]: Started dbus.service.
Oct 2 19:11:09.314788 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 2 19:11:09.314825 systemd[1]: Reached target system-config.target.
Oct 2 19:11:09.315462 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 2 19:11:09.315483 systemd[1]: Reached target user-config.target.
Oct 2 19:11:09.323682 systemd[1]: Started systemd-logind.service.
Oct 2 19:11:09.337925 bash[1166]: Updated "/home/core/.ssh/authorized_keys"
Oct 2 19:11:09.338683 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Oct 2 19:11:09.360006 tar[1135]: ./static
Oct 2 19:11:09.366382 env[1138]: time="2023-10-02T19:11:09.366328400Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Oct 2 19:11:09.373489 update_engine[1130]: I1002 19:11:09.373252 1130 main.cc:92] Flatcar Update Engine starting
Oct 2 19:11:09.375610 systemd[1]: Started update-engine.service.
Oct 2 19:11:09.378009 systemd[1]: Started locksmithd.service.
Oct 2 19:11:09.379441 update_engine[1130]: I1002 19:11:09.379415 1130 update_check_scheduler.cc:74] Next update check in 3m48s
Oct 2 19:11:09.394808 env[1138]: time="2023-10-02T19:11:09.394759800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 2 19:11:09.394952 env[1138]: time="2023-10-02T19:11:09.394931880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 2 19:11:09.398933 env[1138]: time="2023-10-02T19:11:09.398790560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 2 19:11:09.398933 env[1138]: time="2023-10-02T19:11:09.398924160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 2 19:11:09.399015 tar[1135]: ./vlan
Oct 2 19:11:09.399153 env[1138]: time="2023-10-02T19:11:09.399127080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 2 19:11:09.399153 env[1138]: time="2023-10-02T19:11:09.399148280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 2 19:11:09.399223 env[1138]: time="2023-10-02T19:11:09.399161440Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Oct 2 19:11:09.399223 env[1138]: time="2023-10-02T19:11:09.399171040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 2 19:11:09.399264 env[1138]: time="2023-10-02T19:11:09.399240520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 2 19:11:09.399434 env[1138]: time="2023-10-02T19:11:09.399406000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 2 19:11:09.399562 env[1138]: time="2023-10-02T19:11:09.399541160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 2 19:11:09.399562 env[1138]: time="2023-10-02T19:11:09.399559000Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 2 19:11:09.399618 env[1138]: time="2023-10-02T19:11:09.399608000Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Oct 2 19:11:09.399664 env[1138]: time="2023-10-02T19:11:09.399619120Z" level=info msg="metadata content store policy set" policy=shared
Oct 2 19:11:09.417715 env[1138]: time="2023-10-02T19:11:09.417675600Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 2 19:11:09.417715 env[1138]: time="2023-10-02T19:11:09.417711880Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 2 19:11:09.417819 env[1138]: time="2023-10-02T19:11:09.417724840Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 2 19:11:09.417819 env[1138]: time="2023-10-02T19:11:09.417757440Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 2 19:11:09.417819 env[1138]: time="2023-10-02T19:11:09.417773720Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 2 19:11:09.417819 env[1138]: time="2023-10-02T19:11:09.417786600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 2 19:11:09.417819 env[1138]: time="2023-10-02T19:11:09.417806160Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 2 19:11:09.418164 env[1138]: time="2023-10-02T19:11:09.418138960Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 2 19:11:09.418196 env[1138]: time="2023-10-02T19:11:09.418165480Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Oct 2 19:11:09.418196 env[1138]: time="2023-10-02T19:11:09.418181760Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 2 19:11:09.418196 env[1138]: time="2023-10-02T19:11:09.418193480Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 2 19:11:09.418297 env[1138]: time="2023-10-02T19:11:09.418206440Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 2 19:11:09.418355 env[1138]: time="2023-10-02T19:11:09.418333880Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 2 19:11:09.418426 env[1138]: time="2023-10-02T19:11:09.418410960Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 2 19:11:09.418697 env[1138]: time="2023-10-02T19:11:09.418678080Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 2 19:11:09.418750 env[1138]: time="2023-10-02T19:11:09.418708400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 2 19:11:09.418750 env[1138]: time="2023-10-02T19:11:09.418723680Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 2 19:11:09.418849 env[1138]: time="2023-10-02T19:11:09.418833280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 2 19:11:09.418849 env[1138]: time="2023-10-02T19:11:09.418848200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 2 19:11:09.418916 env[1138]: time="2023-10-02T19:11:09.418860920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 2 19:11:09.418916 env[1138]: time="2023-10-02T19:11:09.418872560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 2 19:11:09.418916 env[1138]: time="2023-10-02T19:11:09.418884240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 2 19:11:09.418916 env[1138]: time="2023-10-02T19:11:09.418904720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 2 19:11:09.418916 env[1138]: time="2023-10-02T19:11:09.418916440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 2 19:11:09.419012 env[1138]: time="2023-10-02T19:11:09.418928440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 2 19:11:09.419012 env[1138]: time="2023-10-02T19:11:09.418941320Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 2 19:11:09.419076 env[1138]: time="2023-10-02T19:11:09.419056560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 2 19:11:09.419104 env[1138]: time="2023-10-02T19:11:09.419076200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 2 19:11:09.419104 env[1138]: time="2023-10-02T19:11:09.419089920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 2 19:11:09.419146 env[1138]: time="2023-10-02T19:11:09.419102040Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 2 19:11:09.419146 env[1138]: time="2023-10-02T19:11:09.419116360Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Oct 2 19:11:09.419146 env[1138]: time="2023-10-02T19:11:09.419127280Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 2 19:11:09.419146 env[1138]: time="2023-10-02T19:11:09.419143680Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Oct 2 19:11:09.419217 env[1138]: time="2023-10-02T19:11:09.419191720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 2 19:11:09.419436 env[1138]: time="2023-10-02T19:11:09.419386200Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 19:11:09.421917 env[1138]: time="2023-10-02T19:11:09.419440880Z" level=info msg="Connect containerd service" Oct 2 19:11:09.421917 env[1138]: time="2023-10-02T19:11:09.419471240Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 19:11:09.421917 env[1138]: time="2023-10-02T19:11:09.420122800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 19:11:09.421917 env[1138]: time="2023-10-02T19:11:09.420442160Z" level=info msg="Start subscribing containerd event" Oct 2 19:11:09.421917 env[1138]: time="2023-10-02T19:11:09.420499640Z" level=info msg="Start recovering state" Oct 2 19:11:09.421917 env[1138]: time="2023-10-02T19:11:09.420526000Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Oct 2 19:11:09.421917 env[1138]: time="2023-10-02T19:11:09.420565680Z" level=info msg="Start event monitor" Oct 2 19:11:09.421917 env[1138]: time="2023-10-02T19:11:09.420600720Z" level=info msg="Start snapshots syncer" Oct 2 19:11:09.421917 env[1138]: time="2023-10-02T19:11:09.420562520Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 2 19:11:09.421917 env[1138]: time="2023-10-02T19:11:09.420612440Z" level=info msg="Start cni network conf syncer for default" Oct 2 19:11:09.421917 env[1138]: time="2023-10-02T19:11:09.420622000Z" level=info msg="Start streaming server" Oct 2 19:11:09.421917 env[1138]: time="2023-10-02T19:11:09.420665960Z" level=info msg="containerd successfully booted in 0.056484s" Oct 2 19:11:09.420770 systemd[1]: Started containerd.service. Oct 2 19:11:09.430392 tar[1135]: ./portmap Oct 2 19:11:09.455799 tar[1135]: ./host-local Oct 2 19:11:09.478217 tar[1135]: ./vrf Oct 2 19:11:09.503282 tar[1135]: ./bridge Oct 2 19:11:09.538666 tar[1135]: ./tuning Oct 2 19:11:09.568012 tar[1135]: ./firewall Oct 2 19:11:09.599980 tar[1135]: ./host-device Oct 2 19:11:09.606417 systemd[1]: Finished prepare-critools.service. Oct 2 19:11:09.615102 locksmithd[1170]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 19:11:09.628083 tar[1135]: ./sbr Oct 2 19:11:09.652147 tar[1135]: ./loopback Oct 2 19:11:09.675260 tar[1135]: ./dhcp Oct 2 19:11:09.739650 tar[1135]: ./ptp Oct 2 19:11:09.767625 tar[1135]: ./ipvlan Oct 2 19:11:09.794879 tar[1135]: ./bandwidth Oct 2 19:11:09.801832 systemd[1]: Created slice system-sshd.slice. Oct 2 19:11:09.834107 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 19:11:10.437729 systemd-networkd[1041]: eth0: Gained IPv6LL Oct 2 19:11:12.157140 sshd_keygen[1139]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 19:11:12.176056 systemd[1]: Finished sshd-keygen.service. 
Oct 2 19:11:12.178182 systemd[1]: Starting issuegen.service... Oct 2 19:11:12.179787 systemd[1]: Started sshd@0-10.0.0.113:22-10.0.0.1:37838.service. Oct 2 19:11:12.184354 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 19:11:12.184520 systemd[1]: Finished issuegen.service. Oct 2 19:11:12.186535 systemd[1]: Starting systemd-user-sessions.service... Oct 2 19:11:12.193280 systemd[1]: Finished systemd-user-sessions.service. Oct 2 19:11:12.195159 systemd[1]: Started getty@tty1.service. Oct 2 19:11:12.196904 systemd[1]: Started serial-getty@ttyAMA0.service. Oct 2 19:11:12.197705 systemd[1]: Reached target getty.target. Oct 2 19:11:12.198325 systemd[1]: Reached target multi-user.target. Oct 2 19:11:12.200315 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 19:11:12.207000 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 19:11:12.207137 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 19:11:12.207997 systemd[1]: Startup finished in 595ms (kernel) + 5.709s (initrd) + 5.929s (userspace) = 12.234s. Oct 2 19:11:12.227915 sshd[1189]: Accepted publickey for core from 10.0.0.1 port 37838 ssh2: RSA SHA256:327EISj6dhgnnLT6sEqi2+uwythtGn0QzwGU+yMaXG4 Oct 2 19:11:12.230139 sshd[1189]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:11:12.237665 systemd[1]: Created slice user-500.slice. Oct 2 19:11:12.238666 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 19:11:12.240619 systemd-logind[1127]: New session 1 of user core. Oct 2 19:11:12.246314 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 19:11:12.247558 systemd[1]: Starting user@500.service... Oct 2 19:11:12.250738 (systemd)[1199]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:11:12.312841 systemd[1199]: Queued start job for default target default.target. Oct 2 19:11:12.313266 systemd[1199]: Reached target paths.target. 
Oct 2 19:11:12.313285 systemd[1199]: Reached target sockets.target. Oct 2 19:11:12.313297 systemd[1199]: Reached target timers.target. Oct 2 19:11:12.313308 systemd[1199]: Reached target basic.target. Oct 2 19:11:12.313365 systemd[1199]: Reached target default.target. Oct 2 19:11:12.313391 systemd[1199]: Startup finished in 56ms. Oct 2 19:11:12.313422 systemd[1]: Started user@500.service. Oct 2 19:11:12.314422 systemd[1]: Started session-1.scope. Oct 2 19:11:12.365609 systemd[1]: Started sshd@1-10.0.0.113:22-10.0.0.1:37852.service. Oct 2 19:11:12.407494 sshd[1208]: Accepted publickey for core from 10.0.0.1 port 37852 ssh2: RSA SHA256:327EISj6dhgnnLT6sEqi2+uwythtGn0QzwGU+yMaXG4 Oct 2 19:11:12.408892 sshd[1208]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:11:12.412124 systemd-logind[1127]: New session 2 of user core. Oct 2 19:11:12.412931 systemd[1]: Started session-2.scope. Oct 2 19:11:12.469275 sshd[1208]: pam_unix(sshd:session): session closed for user core Oct 2 19:11:12.472212 systemd[1]: sshd@1-10.0.0.113:22-10.0.0.1:37852.service: Deactivated successfully. Oct 2 19:11:12.472861 systemd[1]: session-2.scope: Deactivated successfully. Oct 2 19:11:12.473370 systemd-logind[1127]: Session 2 logged out. Waiting for processes to exit. Oct 2 19:11:12.474726 systemd[1]: Started sshd@2-10.0.0.113:22-10.0.0.1:37860.service. Oct 2 19:11:12.475364 systemd-logind[1127]: Removed session 2. Oct 2 19:11:12.507790 sshd[1214]: Accepted publickey for core from 10.0.0.1 port 37860 ssh2: RSA SHA256:327EISj6dhgnnLT6sEqi2+uwythtGn0QzwGU+yMaXG4 Oct 2 19:11:12.508933 sshd[1214]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:11:12.512421 systemd-logind[1127]: New session 3 of user core. Oct 2 19:11:12.512778 systemd[1]: Started session-3.scope. 
Oct 2 19:11:12.563160 sshd[1214]: pam_unix(sshd:session): session closed for user core Oct 2 19:11:12.566506 systemd[1]: sshd@2-10.0.0.113:22-10.0.0.1:37860.service: Deactivated successfully. Oct 2 19:11:12.567151 systemd[1]: session-3.scope: Deactivated successfully. Oct 2 19:11:12.567688 systemd-logind[1127]: Session 3 logged out. Waiting for processes to exit. Oct 2 19:11:12.568723 systemd[1]: Started sshd@3-10.0.0.113:22-10.0.0.1:37864.service. Oct 2 19:11:12.569351 systemd-logind[1127]: Removed session 3. Oct 2 19:11:12.601820 sshd[1220]: Accepted publickey for core from 10.0.0.1 port 37864 ssh2: RSA SHA256:327EISj6dhgnnLT6sEqi2+uwythtGn0QzwGU+yMaXG4 Oct 2 19:11:12.602998 sshd[1220]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:11:12.606106 systemd-logind[1127]: New session 4 of user core. Oct 2 19:11:12.606852 systemd[1]: Started session-4.scope. Oct 2 19:11:12.660906 sshd[1220]: pam_unix(sshd:session): session closed for user core Oct 2 19:11:12.663663 systemd[1]: sshd@3-10.0.0.113:22-10.0.0.1:37864.service: Deactivated successfully. Oct 2 19:11:12.664245 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 19:11:12.664724 systemd-logind[1127]: Session 4 logged out. Waiting for processes to exit. Oct 2 19:11:12.665729 systemd[1]: Started sshd@4-10.0.0.113:22-10.0.0.1:37872.service. Oct 2 19:11:12.666292 systemd-logind[1127]: Removed session 4. Oct 2 19:11:12.699547 sshd[1226]: Accepted publickey for core from 10.0.0.1 port 37872 ssh2: RSA SHA256:327EISj6dhgnnLT6sEqi2+uwythtGn0QzwGU+yMaXG4 Oct 2 19:11:12.701049 sshd[1226]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:11:12.704297 systemd-logind[1127]: New session 5 of user core. Oct 2 19:11:12.705191 systemd[1]: Started session-5.scope. 
Oct 2 19:11:12.764933 sudo[1229]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 19:11:12.765142 sudo[1229]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:11:12.779810 dbus-daemon[1113]: avc: received setenforce notice (enforcing=1) Oct 2 19:11:12.780773 sudo[1229]: pam_unix(sudo:session): session closed for user root Oct 2 19:11:12.782748 sshd[1226]: pam_unix(sshd:session): session closed for user core Oct 2 19:11:12.785684 systemd[1]: sshd@4-10.0.0.113:22-10.0.0.1:37872.service: Deactivated successfully. Oct 2 19:11:12.786360 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 19:11:12.786961 systemd-logind[1127]: Session 5 logged out. Waiting for processes to exit. Oct 2 19:11:12.788027 systemd[1]: Started sshd@5-10.0.0.113:22-10.0.0.1:37888.service. Oct 2 19:11:12.788738 systemd-logind[1127]: Removed session 5. Oct 2 19:11:12.822220 sshd[1233]: Accepted publickey for core from 10.0.0.1 port 37888 ssh2: RSA SHA256:327EISj6dhgnnLT6sEqi2+uwythtGn0QzwGU+yMaXG4 Oct 2 19:11:12.823278 sshd[1233]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:11:12.826062 systemd-logind[1127]: New session 6 of user core. Oct 2 19:11:12.826809 systemd[1]: Started session-6.scope. Oct 2 19:11:12.878960 sudo[1237]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 19:11:12.879164 sudo[1237]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:11:12.881881 sudo[1237]: pam_unix(sudo:session): session closed for user root Oct 2 19:11:12.886351 sudo[1236]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 19:11:12.886538 sudo[1236]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:11:12.895363 systemd[1]: Stopping audit-rules.service... 
Oct 2 19:11:12.895000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:11:12.896804 auditctl[1240]: No rules Oct 2 19:11:12.897869 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 19:11:12.898019 systemd[1]: Stopped audit-rules.service. Oct 2 19:11:12.898530 kernel: kauditd_printk_skb: 127 callbacks suppressed Oct 2 19:11:12.898572 kernel: audit: type=1305 audit(1696273872.895:167): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:11:12.898588 kernel: audit: type=1300 audit(1696273872.895:167): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffeefd2d70 a2=420 a3=0 items=0 ppid=1 pid=1240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:12.895000 audit[1240]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffeefd2d70 a2=420 a3=0 items=0 ppid=1 pid=1240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:12.899294 systemd[1]: Starting audit-rules.service... Oct 2 19:11:12.901019 kernel: audit: type=1327 audit(1696273872.895:167): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:11:12.895000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:11:12.901709 kernel: audit: type=1131 audit(1696273872.896:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:11:12.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:11:12.918123 augenrules[1257]: No rules Oct 2 19:11:12.919724 systemd[1]: Finished audit-rules.service. Oct 2 19:11:12.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:11:12.920608 sudo[1236]: pam_unix(sudo:session): session closed for user root Oct 2 19:11:12.919000 audit[1236]: USER_END pid=1236 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:11:12.923105 sshd[1233]: pam_unix(sshd:session): session closed for user core Oct 2 19:11:12.924301 kernel: audit: type=1130 audit(1696273872.918:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:11:12.924359 kernel: audit: type=1106 audit(1696273872.919:170): pid=1236 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:11:12.924380 kernel: audit: type=1104 audit(1696273872.919:171): pid=1236 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:11:12.919000 audit[1236]: CRED_DISP pid=1236 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:11:12.926549 kernel: audit: type=1106 audit(1696273872.923:172): pid=1233 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:11:12.923000 audit[1233]: USER_END pid=1233 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:11:12.926010 systemd[1]: Started sshd@6-10.0.0.113:22-10.0.0.1:44062.service. Oct 2 19:11:12.926535 systemd[1]: sshd@5-10.0.0.113:22-10.0.0.1:37888.service: Deactivated successfully. Oct 2 19:11:12.927191 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 19:11:12.927751 systemd-logind[1127]: Session 6 logged out. Waiting for processes to exit. Oct 2 19:11:12.928576 kernel: audit: type=1104 audit(1696273872.923:173): pid=1233 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:11:12.923000 audit[1233]: CRED_DISP pid=1233 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:11:12.928995 systemd-logind[1127]: Removed session 6. 
Oct 2 19:11:12.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.113:22-10.0.0.1:44062 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:11:12.932479 kernel: audit: type=1130 audit(1696273872.924:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.113:22-10.0.0.1:44062 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:11:12.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.113:22-10.0.0.1:37888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:11:12.957000 audit[1262]: USER_ACCT pid=1262 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:11:12.959653 sshd[1262]: Accepted publickey for core from 10.0.0.1 port 44062 ssh2: RSA SHA256:327EISj6dhgnnLT6sEqi2+uwythtGn0QzwGU+yMaXG4 Oct 2 19:11:12.959000 audit[1262]: CRED_ACQ pid=1262 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:11:12.959000 audit[1262]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff81e1e90 a2=3 a3=1 items=0 ppid=1 pid=1262 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:12.959000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 19:11:12.961185 sshd[1262]: pam_unix(sshd:session): session opened for user core(uid=500) 
by (uid=0) Oct 2 19:11:12.964513 systemd-logind[1127]: New session 7 of user core. Oct 2 19:11:12.964881 systemd[1]: Started session-7.scope. Oct 2 19:11:12.966000 audit[1262]: USER_START pid=1262 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:11:12.968000 audit[1266]: CRED_ACQ pid=1266 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:11:13.015000 audit[1267]: USER_ACCT pid=1267 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:11:13.017546 sudo[1267]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 19:11:13.017765 sudo[1267]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:11:13.016000 audit[1267]: CRED_REFR pid=1267 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:11:13.018000 audit[1267]: USER_START pid=1267 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:11:13.548903 systemd[1]: Reloading. 
Oct 2 19:11:13.591538 /usr/lib/systemd/system-generators/torcx-generator[1297]: time="2023-10-02T19:11:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:11:13.591564 /usr/lib/systemd/system-generators/torcx-generator[1297]: time="2023-10-02T19:11:13Z" level=info msg="torcx already run" Oct 2 19:11:13.647118 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:11:13.647137 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:11:13.661921 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Oct 2 19:11:13.707000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.707000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.707000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.707000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.707000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.707000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.707000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.707000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.707000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.707000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.707000 audit: BPF prog-id=37 op=LOAD Oct 2 19:11:13.707000 audit: BPF prog-id=27 op=UNLOAD Oct 2 19:11:13.707000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.707000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.707000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.707000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.707000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.707000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.707000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.707000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.707000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.707000 audit: BPF prog-id=38 op=LOAD Oct 2 19:11:13.707000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.707000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.707000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.707000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.707000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.707000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.707000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.707000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.707000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:11:13.707000 audit: BPF prog-id=39 op=LOAD Oct 2 19:11:13.707000 audit: BPF prog-id=28 op=UNLOAD Oct 2 19:11:13.707000 audit: BPF prog-id=29 op=UNLOAD Oct 2 19:11:13.708000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.708000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.708000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.708000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.708000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.708000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.708000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.708000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.708000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:11:13.708000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.708000 audit: BPF prog-id=40 op=LOAD Oct 2 19:11:13.708000 audit: BPF prog-id=21 op=UNLOAD Oct 2 19:11:13.708000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.708000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.708000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.708000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.708000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.708000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.708000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.708000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:11:13.708000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.708000 audit: BPF prog-id=41 op=LOAD Oct 2 19:11:13.708000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.708000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.708000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.708000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.708000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.708000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.708000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.708000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.708000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.708000 audit: BPF prog-id=42 op=LOAD Oct 2 19:11:13.708000 audit: BPF prog-id=22 op=UNLOAD Oct 2 19:11:13.708000 audit: BPF prog-id=23 op=UNLOAD Oct 2 19:11:13.709000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.709000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.709000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.709000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.709000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.709000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.709000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.709000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.709000 audit[1]: AVC avc: 
denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.709000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.709000 audit: BPF prog-id=43 op=LOAD Oct 2 19:11:13.709000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:11:13.710000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.710000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.710000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.710000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.710000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.710000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.710000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.710000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.710000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.710000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.710000 audit: BPF prog-id=44 op=LOAD Oct 2 19:11:13.710000 audit: BPF prog-id=26 op=UNLOAD Oct 2 19:11:13.711000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.711000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.711000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.711000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.711000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.711000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.711000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.711000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.711000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.711000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.711000 audit: BPF prog-id=45 op=LOAD Oct 2 19:11:13.711000 audit: BPF prog-id=30 op=UNLOAD Oct 2 19:11:13.711000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.711000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.711000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.711000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.711000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.711000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.711000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.711000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.711000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.711000 audit: BPF prog-id=46 op=LOAD Oct 2 19:11:13.711000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.711000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.711000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.711000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.711000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.711000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:11:13.711000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.711000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.711000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.711000 audit: BPF prog-id=47 op=LOAD Oct 2 19:11:13.711000 audit: BPF prog-id=24 op=UNLOAD Oct 2 19:11:13.711000 audit: BPF prog-id=25 op=UNLOAD Oct 2 19:11:13.712000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.712000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.712000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.712000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.712000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.712000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.712000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.712000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.712000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.712000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.712000 audit: BPF prog-id=48 op=LOAD Oct 2 19:11:13.712000 audit: BPF prog-id=31 op=UNLOAD Oct 2 19:11:13.713000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.713000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.713000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.713000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.713000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.713000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.713000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.713000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.713000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.713000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.713000 audit: BPF prog-id=49 op=LOAD Oct 2 19:11:13.713000 audit: BPF prog-id=32 op=UNLOAD Oct 2 19:11:13.713000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.713000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.713000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.713000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.713000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.713000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.713000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.713000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.713000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.713000 audit: BPF prog-id=50 op=LOAD Oct 2 19:11:13.713000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.713000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.713000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.713000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:11:13.713000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.713000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.713000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.713000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.713000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:13.713000 audit: BPF prog-id=51 op=LOAD Oct 2 19:11:13.713000 audit: BPF prog-id=33 op=UNLOAD Oct 2 19:11:13.713000 audit: BPF prog-id=34 op=UNLOAD Oct 2 19:11:13.721399 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 19:11:13.727445 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 19:11:13.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:11:13.728135 systemd[1]: Reached target network-online.target. Oct 2 19:11:13.729520 systemd[1]: Started kubelet.service. Oct 2 19:11:13.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:11:13.739943 systemd[1]: Starting coreos-metadata.service... Oct 2 19:11:13.747482 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 2 19:11:13.747648 systemd[1]: Finished coreos-metadata.service. Oct 2 19:11:13.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:11:13.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:11:13.886210 kubelet[1335]: E1002 19:11:13.886083 1335 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Oct 2 19:11:13.889623 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 2 19:11:13.889757 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 2 19:11:13.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 2 19:11:14.049830 systemd[1]: Stopped kubelet.service. Oct 2 19:11:14.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:11:14.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Oct 2 19:11:14.065173 systemd[1]: Reloading. Oct 2 19:11:14.110293 /usr/lib/systemd/system-generators/torcx-generator[1402]: time="2023-10-02T19:11:14Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:11:14.110546 /usr/lib/systemd/system-generators/torcx-generator[1402]: time="2023-10-02T19:11:14Z" level=info msg="torcx already run" Oct 2 19:11:14.163754 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:11:14.163771 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:11:14.179277 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Oct 2 19:11:14.228000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.228000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.228000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.228000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.228000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.228000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.228000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.228000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.228000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.228000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.228000 audit: BPF prog-id=52 op=LOAD Oct 2 19:11:14.228000 audit: BPF prog-id=37 op=UNLOAD Oct 2 19:11:14.228000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.228000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.228000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.228000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.228000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.228000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.228000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.228000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.228000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.228000 audit: BPF prog-id=53 op=LOAD Oct 2 19:11:14.228000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.228000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.228000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.228000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.228000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.228000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.228000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.228000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.228000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:11:14.228000 audit: BPF prog-id=54 op=LOAD Oct 2 19:11:14.229000 audit: BPF prog-id=38 op=UNLOAD Oct 2 19:11:14.229000 audit: BPF prog-id=39 op=UNLOAD Oct 2 19:11:14.229000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.229000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.229000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.229000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.229000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.229000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.229000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.229000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.229000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:11:14.229000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.229000 audit: BPF prog-id=55 op=LOAD Oct 2 19:11:14.229000 audit: BPF prog-id=40 op=UNLOAD Oct 2 19:11:14.229000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.229000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.229000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.229000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.229000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.229000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.229000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.229000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:11:14.229000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.229000 audit: BPF prog-id=56 op=LOAD Oct 2 19:11:14.229000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.229000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.229000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.229000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.229000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.229000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.229000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.229000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.229000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.229000 audit: BPF prog-id=57 op=LOAD Oct 2 19:11:14.230000 audit: BPF prog-id=41 op=UNLOAD Oct 2 19:11:14.230000 audit: BPF prog-id=42 op=UNLOAD Oct 2 19:11:14.231000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.231000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.231000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.231000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.231000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.231000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.231000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.231000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.231000 audit[1]: AVC avc: 
denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.231000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.231000 audit: BPF prog-id=58 op=LOAD Oct 2 19:11:14.231000 audit: BPF prog-id=43 op=UNLOAD Oct 2 19:11:14.232000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.232000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.232000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.232000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.232000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.232000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.232000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.232000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.232000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.232000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.232000 audit: BPF prog-id=59 op=LOAD Oct 2 19:11:14.232000 audit: BPF prog-id=44 op=UNLOAD Oct 2 19:11:14.233000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.233000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.233000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.233000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.233000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.233000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.233000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.233000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.233000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.233000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.233000 audit: BPF prog-id=60 op=LOAD Oct 2 19:11:14.233000 audit: BPF prog-id=45 op=UNLOAD Oct 2 19:11:14.233000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.233000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.233000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.233000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.233000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.233000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.233000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.233000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.233000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.233000 audit: BPF prog-id=61 op=LOAD Oct 2 19:11:14.233000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.233000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.233000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.233000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.233000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.233000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:11:14.233000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.233000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.233000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.233000 audit: BPF prog-id=62 op=LOAD Oct 2 19:11:14.233000 audit: BPF prog-id=46 op=UNLOAD Oct 2 19:11:14.233000 audit: BPF prog-id=47 op=UNLOAD Oct 2 19:11:14.234000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.234000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.234000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.234000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.234000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.234000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.234000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.234000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.234000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.234000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.234000 audit: BPF prog-id=63 op=LOAD Oct 2 19:11:14.234000 audit: BPF prog-id=48 op=UNLOAD Oct 2 19:11:14.235000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.235000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.235000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.235000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.235000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.235000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.235000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.235000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.235000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.235000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.236000 audit: BPF prog-id=64 op=LOAD Oct 2 19:11:14.236000 audit: BPF prog-id=49 op=UNLOAD Oct 2 19:11:14.236000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.236000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.236000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.236000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.236000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.236000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.236000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.236000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.236000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.236000 audit: BPF prog-id=65 op=LOAD Oct 2 19:11:14.236000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.236000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.236000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.236000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:11:14.236000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.236000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.236000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.236000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.236000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:14.236000 audit: BPF prog-id=66 op=LOAD Oct 2 19:11:14.236000 audit: BPF prog-id=50 op=UNLOAD Oct 2 19:11:14.236000 audit: BPF prog-id=51 op=UNLOAD Oct 2 19:11:14.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:11:14.248811 systemd[1]: Started kubelet.service. Oct 2 19:11:14.295454 kubelet[1440]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 19:11:14.295454 kubelet[1440]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:11:14.295454 kubelet[1440]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:11:14.295855 kubelet[1440]: I1002 19:11:14.295583 1440 server.go:200] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:11:14.297043 kubelet[1440]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 19:11:14.297043 kubelet[1440]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:11:14.297043 kubelet[1440]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:11:15.029094 kubelet[1440]: I1002 19:11:15.029037 1440 server.go:413] "Kubelet version" kubeletVersion="v1.25.10" Oct 2 19:11:15.029094 kubelet[1440]: I1002 19:11:15.029079 1440 server.go:415] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:11:15.029346 kubelet[1440]: I1002 19:11:15.029320 1440 server.go:825] "Client rotation is on, will bootstrap in background" Oct 2 19:11:15.033915 kubelet[1440]: I1002 19:11:15.033815 1440 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:11:15.036142 kubelet[1440]: W1002 19:11:15.036113 1440 machine.go:65] Cannot read vendor id correctly, set empty. Oct 2 19:11:15.037054 kubelet[1440]: I1002 19:11:15.037034 1440 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 2 19:11:15.037363 kubelet[1440]: I1002 19:11:15.037348 1440 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:11:15.037425 kubelet[1440]: I1002 19:11:15.037414 1440 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none} Oct 2 19:11:15.037580 kubelet[1440]: I1002 19:11:15.037571 1440 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 19:11:15.037607 kubelet[1440]: I1002 19:11:15.037584 1440 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true Oct 2 19:11:15.037692 kubelet[1440]: I1002 19:11:15.037681 1440 state_mem.go:36] 
"Initialized new in-memory state store" Oct 2 19:11:15.041422 kubelet[1440]: I1002 19:11:15.041402 1440 kubelet.go:381] "Attempting to sync node with API server" Oct 2 19:11:15.041422 kubelet[1440]: I1002 19:11:15.041421 1440 kubelet.go:270] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:11:15.041501 kubelet[1440]: I1002 19:11:15.041438 1440 kubelet.go:281] "Adding apiserver pod source" Oct 2 19:11:15.041501 kubelet[1440]: I1002 19:11:15.041449 1440 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:11:15.041553 kubelet[1440]: E1002 19:11:15.041531 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:15.041591 kubelet[1440]: E1002 19:11:15.041580 1440 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:15.042737 kubelet[1440]: I1002 19:11:15.042714 1440 kuberuntime_manager.go:240] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:11:15.043711 kubelet[1440]: W1002 19:11:15.043694 1440 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 2 19:11:15.044463 kubelet[1440]: I1002 19:11:15.044448 1440 server.go:1175] "Started kubelet" Oct 2 19:11:15.045396 kubelet[1440]: I1002 19:11:15.045374 1440 kubelet.go:1274] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:11:15.043000 audit[1440]: AVC avc: denied { mac_admin } for pid=1440 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:15.043000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:11:15.043000 audit[1440]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=400090f1d0 a1=40000211b8 a2=400090f1a0 a3=25 items=0 ppid=1 pid=1440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.043000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:11:15.043000 audit[1440]: AVC avc: denied { mac_admin } for pid=1440 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:15.043000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:11:15.043000 audit[1440]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=400087eae0 a1=40000211d0 a2=400090f260 a3=25 items=0 ppid=1 pid=1440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.043000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:11:15.045815 kubelet[1440]: I1002 19:11:15.045654 1440 kubelet.go:1278] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:11:15.045906 kubelet[1440]: I1002 19:11:15.045682 1440 server.go:155] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:11:15.046055 kubelet[1440]: I1002 19:11:15.046041 1440 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:11:15.046607 kubelet[1440]: I1002 19:11:15.046592 1440 server.go:438] "Adding debug handlers to kubelet server" Oct 2 19:11:15.047251 kubelet[1440]: E1002 19:11:15.045755 1440 cri_stats_provider.go:452] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:11:15.047296 kubelet[1440]: E1002 19:11:15.047263 1440 kubelet.go:1317] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:11:15.048654 kubelet[1440]: I1002 19:11:15.048604 1440 volume_manager.go:293] "Starting Kubelet Volume Manager" Oct 2 19:11:15.048711 kubelet[1440]: I1002 19:11:15.048687 1440 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 2 19:11:15.049165 kubelet[1440]: E1002 19:11:15.049138 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:11:15.064904 kubelet[1440]: W1002 19:11:15.060528 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:11:15.064904 kubelet[1440]: W1002 19:11:15.060595 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.0.0.113" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:11:15.064904 kubelet[1440]: E1002 19:11:15.060704 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.113" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:11:15.064904 kubelet[1440]: W1002 19:11:15.060743 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:11:15.064904 kubelet[1440]: E1002 19:11:15.060753 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User 
"system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:11:15.065079 kubelet[1440]: E1002 19:11:15.060794 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113.178a601a23cc5b70", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.113", UID:"10.0.0.113", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.113"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 44424560, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 44424560, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:11:15.065079 kubelet[1440]: E1002 19:11:15.060879 1440 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.0.0.113" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:11:15.065079 kubelet[1440]: E1002 19:11:15.061038 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:11:15.065181 kubelet[1440]: E1002 19:11:15.063299 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113.178a601a23f7830e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.113", UID:"10.0.0.113", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.113"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 47252750, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 47252750, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User 
"system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:11:15.065181 kubelet[1440]: I1002 19:11:15.064099 1440 cpu_manager.go:213] "Starting CPU manager" policy="none" Oct 2 19:11:15.065181 kubelet[1440]: I1002 19:11:15.064111 1440 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s" Oct 2 19:11:15.065181 kubelet[1440]: I1002 19:11:15.064127 1440 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:11:15.066868 kubelet[1440]: E1002 19:11:15.066789 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113.178a601a24f14959", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.113", UID:"10.0.0.113", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.113 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.113"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 63621977, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 63621977, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:11:15.069986 kubelet[1440]: E1002 19:11:15.069908 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113.178a601a24f1a120", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.113", UID:"10.0.0.113", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.113 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.113"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 63644448, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 63644448, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:11:15.070764 kubelet[1440]: E1002 19:11:15.070693 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113.178a601a24f1b247", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.113", UID:"10.0.0.113", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.113 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.113"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 63648839, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 63648839, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:11:15.103369 kubelet[1440]: I1002 19:11:15.103336 1440 policy_none.go:49] "None policy: Start" Oct 2 19:11:15.103998 kubelet[1440]: I1002 19:11:15.103977 1440 memory_manager.go:168] "Starting memorymanager" policy="None" Oct 2 19:11:15.103998 kubelet[1440]: I1002 19:11:15.104001 1440 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:11:15.110645 systemd[1]: Created slice kubepods.slice. Oct 2 19:11:15.114214 systemd[1]: Created slice kubepods-burstable.slice. 
Oct 2 19:11:15.116500 systemd[1]: Created slice kubepods-besteffort.slice. Oct 2 19:11:15.117000 audit[1459]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1459 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:15.117000 audit[1459]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd4b0ee70 a2=0 a3=1 items=0 ppid=1440 pid=1459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.117000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:11:15.118000 audit[1461]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1461 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:15.118000 audit[1461]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffcd249f30 a2=0 a3=1 items=0 ppid=1440 pid=1461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.118000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:11:15.123999 kubelet[1440]: I1002 19:11:15.123978 1440 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:11:15.123000 audit[1440]: AVC avc: denied { mac_admin } for pid=1440 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:15.123000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:11:15.123000 audit[1440]: SYSCALL arch=c00000b7 syscall=5 success=no 
exit=-22 a0=400095eed0 a1=40011401f8 a2=400095eea0 a3=25 items=0 ppid=1 pid=1440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.123000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:11:15.124184 kubelet[1440]: I1002 19:11:15.124043 1440 server.go:86] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:11:15.124209 kubelet[1440]: I1002 19:11:15.124197 1440 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:11:15.125096 kubelet[1440]: E1002 19:11:15.125071 1440 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.113\" not found" Oct 2 19:11:15.126441 kubelet[1440]: E1002 19:11:15.126353 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113.178a601a289f2f23", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.113", UID:"10.0.0.113", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node 
Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.113"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 125350179, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 125350179, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:11:15.148559 kubelet[1440]: E1002 19:11:15.148529 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:15.150186 kubelet[1440]: I1002 19:11:15.150163 1440 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.113" Oct 2 19:11:15.151284 kubelet[1440]: E1002 19:11:15.151247 1440 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.113" Oct 2 19:11:15.151351 kubelet[1440]: E1002 19:11:15.151291 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113.178a601a24f14959", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.113", UID:"10.0.0.113", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.113 
status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.113"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 63621977, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 150092415, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.113.178a601a24f14959" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:11:15.120000 audit[1463]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1463 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:15.120000 audit[1463]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe03e0150 a2=0 a3=1 items=0 ppid=1440 pid=1463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.120000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:11:15.153890 kubelet[1440]: E1002 19:11:15.153735 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113.178a601a24f1a120", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", 
Namespace:"", Name:"10.0.0.113", UID:"10.0.0.113", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.113 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.113"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 63644448, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 150124387, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.113.178a601a24f1a120" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:11:15.154000 audit[1468]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1468 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:15.154000 audit[1468]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd763d7f0 a2=0 a3=1 items=0 ppid=1440 pid=1468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.154000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:11:15.154856 kubelet[1440]: E1002 19:11:15.154716 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113.178a601a24f1b247", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.113", UID:"10.0.0.113", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.113 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.113"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 63648839, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 150128658, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.113.178a601a24f1b247" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:11:15.185000 audit[1473]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1473 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:15.185000 audit[1473]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffee814490 a2=0 a3=1 items=0 ppid=1440 pid=1473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.185000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:11:15.186000 audit[1474]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=1474 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:15.186000 audit[1474]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=124 a0=3 a1=ffffec8e6620 a2=0 a3=1 items=0 ppid=1440 pid=1474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.186000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:11:15.191000 audit[1477]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=1477 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:15.191000 audit[1477]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffebb84670 a2=0 a3=1 items=0 ppid=1440 pid=1477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.191000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:11:15.196000 audit[1480]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1480 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:15.196000 audit[1480]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffe9875380 a2=0 a3=1 items=0 ppid=1440 pid=1480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.196000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:11:15.197000 audit[1481]: NETFILTER_CFG 
table=nat:10 family=2 entries=1 op=nft_register_chain pid=1481 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:15.197000 audit[1481]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe387b560 a2=0 a3=1 items=0 ppid=1440 pid=1481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.197000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:11:15.199000 audit[1482]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=1482 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:15.199000 audit[1482]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe3144d00 a2=0 a3=1 items=0 ppid=1440 pid=1482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.199000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:11:15.201000 audit[1484]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=1484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:15.201000 audit[1484]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=fffff61a50f0 a2=0 a3=1 items=0 ppid=1440 pid=1484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.201000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:11:15.203000 audit[1486]: 
NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1486 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:15.203000 audit[1486]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffec798430 a2=0 a3=1 items=0 ppid=1440 pid=1486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.203000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:11:15.223000 audit[1489]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1489 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:15.223000 audit[1489]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffdd275150 a2=0 a3=1 items=0 ppid=1440 pid=1489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.223000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:11:15.225000 audit[1491]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=1491 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:15.225000 audit[1491]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=fffffd30f330 a2=0 a3=1 items=0 ppid=1440 pid=1491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Oct 2 19:11:15.225000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:11:15.232000 audit[1494]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=1494 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:15.232000 audit[1494]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=540 a0=3 a1=fffffe6d8c60 a2=0 a3=1 items=0 ppid=1440 pid=1494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.232000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:11:15.233696 kubelet[1440]: I1002 19:11:15.233663 1440 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Oct 2 19:11:15.233000 audit[1495]: NETFILTER_CFG table=mangle:17 family=10 entries=2 op=nft_register_chain pid=1495 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:15.233000 audit[1495]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffec1df1a0 a2=0 a3=1 items=0 ppid=1440 pid=1495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.233000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:11:15.234000 audit[1496]: NETFILTER_CFG table=mangle:18 family=2 entries=1 op=nft_register_chain pid=1496 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:15.234000 audit[1496]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc79e3130 a2=0 a3=1 items=0 ppid=1440 pid=1496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.234000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:11:15.234000 audit[1497]: NETFILTER_CFG table=nat:19 family=10 entries=2 op=nft_register_chain pid=1497 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:15.234000 audit[1497]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd9bb21a0 a2=0 a3=1 items=0 ppid=1440 pid=1497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.234000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:11:15.235000 audit[1498]: NETFILTER_CFG table=nat:20 family=2 entries=1 op=nft_register_chain pid=1498 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:15.235000 audit[1498]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff85d3b90 a2=0 a3=1 items=0 ppid=1440 pid=1498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.235000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:11:15.237000 audit[1500]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_chain pid=1500 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:15.237000 audit[1500]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff0d9e420 a2=0 a3=1 items=0 ppid=1440 pid=1500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.237000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:11:15.237000 audit[1501]: NETFILTER_CFG table=nat:22 family=10 entries=1 op=nft_register_rule pid=1501 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:15.237000 audit[1501]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffc5c7cd00 a2=0 a3=1 items=0 ppid=1440 pid=1501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.237000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:11:15.238000 audit[1502]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=1502 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:15.238000 audit[1502]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=fffff84e4e80 a2=0 a3=1 items=0 ppid=1440 pid=1502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.238000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:11:15.240000 audit[1504]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=1504 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:15.240000 audit[1504]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffc799e7e0 a2=0 a3=1 items=0 ppid=1440 pid=1504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.240000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:11:15.241000 audit[1505]: NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=1505 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:15.241000 audit[1505]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffff2dcf20 a2=0 a3=1 items=0 ppid=1440 pid=1505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.241000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:11:15.242000 audit[1506]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=1506 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:15.242000 audit[1506]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc946da40 a2=0 a3=1 items=0 ppid=1440 pid=1506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.242000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:11:15.245000 audit[1508]: NETFILTER_CFG table=nat:27 family=10 entries=1 op=nft_register_rule pid=1508 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:15.245000 audit[1508]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffca317050 a2=0 a3=1 items=0 ppid=1440 pid=1508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.245000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:11:15.247000 audit[1510]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=1510 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:15.247000 audit[1510]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffdf2ebeb0 a2=0 a3=1 items=0 ppid=1440 pid=1510 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.247000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:11:15.249216 kubelet[1440]: E1002 19:11:15.249182 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:15.249000 audit[1512]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=1512 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:15.249000 audit[1512]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffc5f192f0 a2=0 a3=1 items=0 ppid=1440 pid=1512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.249000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:11:15.253000 audit[1514]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=1514 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:15.253000 audit[1514]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffc686d590 a2=0 a3=1 items=0 ppid=1440 pid=1514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.253000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:11:15.256000 audit[1516]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=1516 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:15.256000 audit[1516]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=556 a0=3 a1=fffff9377ba0 a2=0 a3=1 items=0 ppid=1440 pid=1516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.256000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:11:15.257605 kubelet[1440]: I1002 19:11:15.257588 1440 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Oct 2 19:11:15.257764 kubelet[1440]: I1002 19:11:15.257753 1440 status_manager.go:161] "Starting to sync pod status with apiserver" Oct 2 19:11:15.257826 kubelet[1440]: I1002 19:11:15.257816 1440 kubelet.go:2010] "Starting kubelet main sync loop" Oct 2 19:11:15.257919 kubelet[1440]: E1002 19:11:15.257908 1440 kubelet.go:2034] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 19:11:15.258901 kubelet[1440]: W1002 19:11:15.258880 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:11:15.258901 kubelet[1440]: E1002 19:11:15.258903 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:11:15.258000 audit[1517]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=1517 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:15.258000 audit[1517]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd800bf50 a2=0 a3=1 items=0 ppid=1440 pid=1517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.258000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:11:15.259000 audit[1518]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=1518 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:15.259000 audit[1518]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=100 a0=3 a1=ffffde11d150 a2=0 a3=1 items=0 ppid=1440 pid=1518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.259000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:11:15.260000 audit[1519]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=1519 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:15.260000 audit[1519]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffe2c54e0 a2=0 a3=1 items=0 ppid=1440 pid=1519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:15.260000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:11:15.261735 kubelet[1440]: E1002 19:11:15.261703 1440 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.0.0.113" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:11:15.350721 kubelet[1440]: E1002 19:11:15.349881 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:15.353917 kubelet[1440]: I1002 19:11:15.353901 1440 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.113" Oct 2 19:11:15.354958 kubelet[1440]: E1002 19:11:15.354888 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113.178a601a24f14959", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", 
ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.113", UID:"10.0.0.113", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.113 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.113"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 63621977, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 353860992, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.113.178a601a24f14959" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:11:15.355183 kubelet[1440]: E1002 19:11:15.355058 1440 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.113" Oct 2 19:11:15.356113 kubelet[1440]: E1002 19:11:15.356031 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113.178a601a24f1a120", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.113", UID:"10.0.0.113", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.113 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.113"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 63644448, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 353871569, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.113.178a601a24f1a120" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:11:15.446707 kubelet[1440]: E1002 19:11:15.446604 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113.178a601a24f1b247", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.113", UID:"10.0.0.113", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.113 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.113"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 63648839, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 353877995, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.113.178a601a24f1b247" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:11:15.451008 kubelet[1440]: E1002 19:11:15.450980 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:15.551462 kubelet[1440]: E1002 19:11:15.551411 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:15.651991 kubelet[1440]: E1002 19:11:15.651863 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:15.663034 kubelet[1440]: E1002 19:11:15.663005 1440 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.0.0.113" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:11:15.752330 kubelet[1440]: E1002 19:11:15.752279 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:15.756000 kubelet[1440]: I1002 19:11:15.755978 1440 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.113" Oct 2 19:11:15.757083 kubelet[1440]: E1002 19:11:15.757064 1440 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.113" Oct 2 19:11:15.757143 kubelet[1440]: E1002 19:11:15.757061 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113.178a601a24f14959", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.113", 
UID:"10.0.0.113", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.113 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.113"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 63621977, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 755910443, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.113.178a601a24f14959" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:11:15.846484 kubelet[1440]: E1002 19:11:15.846421 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113.178a601a24f1a120", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.113", UID:"10.0.0.113", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.113 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.113"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 63644448, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 755926449, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), 
Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.113.178a601a24f1a120" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:11:15.852717 kubelet[1440]: E1002 19:11:15.852701 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:15.953184 kubelet[1440]: E1002 19:11:15.953110 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:16.042517 kubelet[1440]: E1002 19:11:16.042481 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:16.046938 kubelet[1440]: E1002 19:11:16.046847 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113.178a601a24f1b247", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.113", UID:"10.0.0.113", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.113 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.113"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 63648839, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 755929562, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), 
ReportingController:"", ReportingInstance:""}': 'events "10.0.0.113.178a601a24f1b247" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:11:16.054133 kubelet[1440]: E1002 19:11:16.054115 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:16.155460 kubelet[1440]: E1002 19:11:16.155432 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:16.210896 kubelet[1440]: W1002 19:11:16.210805 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.0.0.113" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:11:16.210896 kubelet[1440]: E1002 19:11:16.210834 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.113" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:11:16.255702 kubelet[1440]: E1002 19:11:16.255676 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:16.329027 kubelet[1440]: W1002 19:11:16.328986 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:11:16.329027 kubelet[1440]: E1002 19:11:16.329025 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:11:16.356249 kubelet[1440]: E1002 19:11:16.356222 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" 
not found" Oct 2 19:11:16.456699 kubelet[1440]: E1002 19:11:16.456668 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:16.464967 kubelet[1440]: E1002 19:11:16.464889 1440 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.0.0.113" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:11:16.557389 kubelet[1440]: E1002 19:11:16.557365 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:16.557980 kubelet[1440]: I1002 19:11:16.557959 1440 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.113" Oct 2 19:11:16.559523 kubelet[1440]: E1002 19:11:16.559443 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113.178a601a24f14959", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.113", UID:"10.0.0.113", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.113 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.113"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 63621977, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 11, 16, 557893981, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.113.178a601a24f14959" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:11:16.560403 kubelet[1440]: E1002 19:11:16.560332 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113.178a601a24f1a120", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.113", UID:"10.0.0.113", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.113 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.113"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 63644448, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 11, 16, 557905957, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.113.178a601a24f1a120" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:11:16.562339 kubelet[1440]: E1002 19:11:16.561528 1440 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.113" Oct 2 19:11:16.596503 kubelet[1440]: W1002 19:11:16.596479 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:11:16.596567 kubelet[1440]: E1002 19:11:16.596510 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:11:16.637658 kubelet[1440]: W1002 19:11:16.637618 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:11:16.637658 kubelet[1440]: E1002 19:11:16.637663 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:11:16.646666 kubelet[1440]: E1002 19:11:16.646578 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113.178a601a24f1b247", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.113", UID:"10.0.0.113", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.113 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.113"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 63648839, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 11, 16, 557908991, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.113.178a601a24f1b247" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:11:16.658053 kubelet[1440]: E1002 19:11:16.658019 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:16.758653 kubelet[1440]: E1002 19:11:16.758505 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:16.859015 kubelet[1440]: E1002 19:11:16.858962 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:16.959798 kubelet[1440]: E1002 19:11:16.959758 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:17.043246 kubelet[1440]: E1002 19:11:17.043143 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:17.060574 kubelet[1440]: E1002 19:11:17.060521 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:17.161277 kubelet[1440]: E1002 19:11:17.161242 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" 
not found" Oct 2 19:11:17.261866 kubelet[1440]: E1002 19:11:17.261833 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:17.362354 kubelet[1440]: E1002 19:11:17.362261 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:17.462817 kubelet[1440]: E1002 19:11:17.462750 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:17.563197 kubelet[1440]: E1002 19:11:17.563149 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:17.663640 kubelet[1440]: E1002 19:11:17.663532 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:17.763933 kubelet[1440]: E1002 19:11:17.763888 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:17.826312 kubelet[1440]: W1002 19:11:17.826277 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.0.0.113" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:11:17.826312 kubelet[1440]: E1002 19:11:17.826312 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.113" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:11:17.864753 kubelet[1440]: E1002 19:11:17.864701 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:17.965210 kubelet[1440]: E1002 19:11:17.965110 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:18.043545 kubelet[1440]: E1002 19:11:18.043480 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:18.065911 kubelet[1440]: E1002 19:11:18.065873 1440 kubelet.go:2448] "Error getting node" err="node 
\"10.0.0.113\" not found" Oct 2 19:11:18.066691 kubelet[1440]: E1002 19:11:18.066657 1440 controller.go:144] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.0.0.113" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:11:18.163688 kubelet[1440]: I1002 19:11:18.163664 1440 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.113" Oct 2 19:11:18.164941 kubelet[1440]: E1002 19:11:18.164913 1440 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.113" Oct 2 19:11:18.164979 kubelet[1440]: E1002 19:11:18.164890 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113.178a601a24f14959", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.113", UID:"10.0.0.113", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.113 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.113"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 63621977, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 11, 18, 163609667, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.113.178a601a24f14959" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:11:18.165774 kubelet[1440]: E1002 19:11:18.165707 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113.178a601a24f1a120", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.113", UID:"10.0.0.113", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.113 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.113"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 63644448, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 11, 18, 163621646, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.113.178a601a24f1a120" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:11:18.165934 kubelet[1440]: E1002 19:11:18.165908 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:18.166454 kubelet[1440]: E1002 19:11:18.166391 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113.178a601a24f1b247", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.113", UID:"10.0.0.113", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.113 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.113"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 63648839, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 11, 18, 163625679, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.113.178a601a24f1b247" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:11:18.266386 kubelet[1440]: E1002 19:11:18.265949 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:18.366779 kubelet[1440]: E1002 19:11:18.366734 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:18.467373 kubelet[1440]: E1002 19:11:18.467322 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:18.524967 kubelet[1440]: W1002 19:11:18.524724 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:11:18.525096 kubelet[1440]: E1002 19:11:18.525083 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:11:18.568312 kubelet[1440]: E1002 19:11:18.568262 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:18.664876 kubelet[1440]: W1002 19:11:18.664840 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:11:18.664876 kubelet[1440]: E1002 19:11:18.664875 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:11:18.669036 kubelet[1440]: E1002 19:11:18.669002 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:18.769539 kubelet[1440]: E1002 19:11:18.769507 
1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:18.866219 kubelet[1440]: W1002 19:11:18.865928 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:11:18.866367 kubelet[1440]: E1002 19:11:18.866352 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:11:18.870121 kubelet[1440]: E1002 19:11:18.870098 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:18.970620 kubelet[1440]: E1002 19:11:18.970592 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:19.043995 kubelet[1440]: E1002 19:11:19.043953 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:19.071643 kubelet[1440]: E1002 19:11:19.071604 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:19.172594 kubelet[1440]: E1002 19:11:19.172296 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:19.272810 kubelet[1440]: E1002 19:11:19.272785 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:19.373067 kubelet[1440]: E1002 19:11:19.373022 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:19.473648 kubelet[1440]: E1002 19:11:19.473411 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:19.573979 kubelet[1440]: E1002 19:11:19.573944 1440 kubelet.go:2448] "Error getting node" err="node 
\"10.0.0.113\" not found" Oct 2 19:11:19.674436 kubelet[1440]: E1002 19:11:19.674411 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:19.775143 kubelet[1440]: E1002 19:11:19.774919 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:19.875360 kubelet[1440]: E1002 19:11:19.875329 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:19.976109 kubelet[1440]: E1002 19:11:19.976075 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:20.044568 kubelet[1440]: E1002 19:11:20.044370 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:20.076926 kubelet[1440]: E1002 19:11:20.076903 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:20.125501 kubelet[1440]: E1002 19:11:20.125484 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:11:20.177984 kubelet[1440]: E1002 19:11:20.177944 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:20.278505 kubelet[1440]: E1002 19:11:20.278484 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:20.379176 kubelet[1440]: E1002 19:11:20.378936 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:20.479813 kubelet[1440]: E1002 19:11:20.479782 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:20.580434 kubelet[1440]: E1002 19:11:20.580385 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:20.681162 kubelet[1440]: E1002 19:11:20.680912 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:20.781594 
kubelet[1440]: E1002 19:11:20.781553 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:20.882106 kubelet[1440]: E1002 19:11:20.882070 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:20.982845 kubelet[1440]: E1002 19:11:20.982557 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:21.044982 kubelet[1440]: E1002 19:11:21.044950 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:21.083567 kubelet[1440]: E1002 19:11:21.083537 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:21.184243 kubelet[1440]: E1002 19:11:21.184198 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:21.267889 kubelet[1440]: E1002 19:11:21.267640 1440 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.0.0.113" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:11:21.284758 kubelet[1440]: E1002 19:11:21.284732 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:21.365525 kubelet[1440]: I1002 19:11:21.365502 1440 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.113" Oct 2 19:11:21.366784 kubelet[1440]: E1002 19:11:21.366758 1440 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.113" Oct 2 19:11:21.366876 kubelet[1440]: E1002 19:11:21.366801 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113.178a601a24f14959", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", 
ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.113", UID:"10.0.0.113", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.113 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.113"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 63621977, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 11, 21, 365463151, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.113.178a601a24f14959" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:11:21.367819 kubelet[1440]: E1002 19:11:21.367749 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113.178a601a24f1a120", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.113", UID:"10.0.0.113", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.113 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.113"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 63644448, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 11, 21, 365475932, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.113.178a601a24f1a120" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:11:21.369002 kubelet[1440]: E1002 19:11:21.368940 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113.178a601a24f1b247", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.113", UID:"10.0.0.113", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.113 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.113"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 11, 15, 63648839, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 11, 21, 365479208, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.113.178a601a24f1b247" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:11:21.385258 kubelet[1440]: E1002 19:11:21.385221 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:21.485720 kubelet[1440]: E1002 19:11:21.485687 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:21.586697 kubelet[1440]: E1002 19:11:21.586283 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:21.686890 kubelet[1440]: E1002 19:11:21.686853 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:21.787383 kubelet[1440]: E1002 19:11:21.787346 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:21.888004 kubelet[1440]: E1002 19:11:21.887788 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:21.988247 kubelet[1440]: E1002 19:11:21.988215 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:22.045561 kubelet[1440]: E1002 19:11:22.045531 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:22.089121 kubelet[1440]: E1002 19:11:22.089089 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:22.189849 kubelet[1440]: E1002 19:11:22.189606 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:22.290087 kubelet[1440]: E1002 19:11:22.290065 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:22.390441 kubelet[1440]: E1002 19:11:22.390417 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:22.491124 kubelet[1440]: E1002 19:11:22.490901 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:22.591526 kubelet[1440]: E1002 19:11:22.591499 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:22.691906 
kubelet[1440]: E1002 19:11:22.691872 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:22.792563 kubelet[1440]: E1002 19:11:22.792289 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:22.892793 kubelet[1440]: E1002 19:11:22.892763 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:22.993252 kubelet[1440]: E1002 19:11:22.993215 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:23.046658 kubelet[1440]: E1002 19:11:23.046546 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:23.084982 kubelet[1440]: W1002 19:11:23.084939 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:11:23.084982 kubelet[1440]: E1002 19:11:23.084971 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:11:23.094207 kubelet[1440]: E1002 19:11:23.094186 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:23.194708 kubelet[1440]: E1002 19:11:23.194677 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:23.295232 kubelet[1440]: E1002 19:11:23.295198 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:23.365783 kubelet[1440]: W1002 19:11:23.365698 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list 
resource "services" in API group "" at the cluster scope Oct 2 19:11:23.365927 kubelet[1440]: E1002 19:11:23.365914 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:11:23.396011 kubelet[1440]: E1002 19:11:23.395974 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:23.496462 kubelet[1440]: E1002 19:11:23.496432 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:23.597018 kubelet[1440]: E1002 19:11:23.596971 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:23.669497 kubelet[1440]: W1002 19:11:23.669409 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.0.0.113" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:11:23.669658 kubelet[1440]: E1002 19:11:23.669623 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.113" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:11:23.697654 kubelet[1440]: E1002 19:11:23.697619 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:23.772089 kubelet[1440]: W1002 19:11:23.772061 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:11:23.772253 kubelet[1440]: E1002 19:11:23.772240 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list 
*v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:11:23.798363 kubelet[1440]: E1002 19:11:23.798327 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:23.898815 kubelet[1440]: E1002 19:11:23.898777 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:23.999314 kubelet[1440]: E1002 19:11:23.999219 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:24.047638 kubelet[1440]: E1002 19:11:24.047601 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:24.100193 kubelet[1440]: E1002 19:11:24.100162 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:24.200707 kubelet[1440]: E1002 19:11:24.200669 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:24.301660 kubelet[1440]: E1002 19:11:24.301532 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:24.402022 kubelet[1440]: E1002 19:11:24.401978 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:24.502483 kubelet[1440]: E1002 19:11:24.502453 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:24.603219 kubelet[1440]: E1002 19:11:24.603129 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:24.703649 kubelet[1440]: E1002 19:11:24.703597 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:24.804126 kubelet[1440]: E1002 19:11:24.804094 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:24.904685 kubelet[1440]: E1002 19:11:24.904565 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not 
found" Oct 2 19:11:25.005151 kubelet[1440]: E1002 19:11:25.005117 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:25.031268 kubelet[1440]: I1002 19:11:25.031239 1440 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 19:11:25.048671 kubelet[1440]: E1002 19:11:25.048641 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:25.105506 kubelet[1440]: E1002 19:11:25.105465 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:25.125710 kubelet[1440]: E1002 19:11:25.125683 1440 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.113\" not found" Oct 2 19:11:25.126244 kubelet[1440]: E1002 19:11:25.126226 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:11:25.205688 kubelet[1440]: E1002 19:11:25.205575 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:25.306087 kubelet[1440]: E1002 19:11:25.306033 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:25.406408 kubelet[1440]: E1002 19:11:25.406362 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:25.427881 kubelet[1440]: E1002 19:11:25.427844 1440 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.113" not found Oct 2 19:11:25.507465 kubelet[1440]: E1002 19:11:25.507349 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:25.607969 kubelet[1440]: E1002 19:11:25.607927 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not 
found" Oct 2 19:11:25.708710 kubelet[1440]: E1002 19:11:25.708673 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:25.809364 kubelet[1440]: E1002 19:11:25.809259 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:25.910008 kubelet[1440]: E1002 19:11:25.909971 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:26.010441 kubelet[1440]: E1002 19:11:26.010410 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:26.049809 kubelet[1440]: E1002 19:11:26.049778 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:26.111446 kubelet[1440]: E1002 19:11:26.111348 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:26.211776 kubelet[1440]: E1002 19:11:26.211735 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:26.312274 kubelet[1440]: E1002 19:11:26.312246 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:26.413118 kubelet[1440]: E1002 19:11:26.413026 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:26.463773 kubelet[1440]: E1002 19:11:26.463733 1440 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.113" not found Oct 2 19:11:26.514047 kubelet[1440]: E1002 19:11:26.514015 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:26.615075 kubelet[1440]: E1002 19:11:26.615034 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:26.715916 kubelet[1440]: E1002 19:11:26.715820 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found" Oct 2 19:11:26.816325 kubelet[1440]: E1002 19:11:26.816290 1440 
kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:26.916616 kubelet[1440]: E1002 19:11:26.916585 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:27.017270 kubelet[1440]: E1002 19:11:27.017158 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:27.050663 kubelet[1440]: E1002 19:11:27.050606 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:11:27.118078 kubelet[1440]: E1002 19:11:27.118039 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:27.218762 kubelet[1440]: E1002 19:11:27.218726 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:27.319413 kubelet[1440]: E1002 19:11:27.319300 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:27.419822 kubelet[1440]: E1002 19:11:27.419766 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:27.520679 kubelet[1440]: E1002 19:11:27.520643 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:27.621388 kubelet[1440]: E1002 19:11:27.621288 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:27.672781 kubelet[1440]: E1002 19:11:27.672742 1440 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.113\" not found" node="10.0.0.113"
Oct 2 19:11:27.721876 kubelet[1440]: E1002 19:11:27.721838 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:27.768271 kubelet[1440]: I1002 19:11:27.768242 1440 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.113"
Oct 2 19:11:27.822247 kubelet[1440]: E1002 19:11:27.822182 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:27.865032 kubelet[1440]: I1002 19:11:27.864997 1440 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.113"
Oct 2 19:11:27.923140 kubelet[1440]: E1002 19:11:27.923032 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:28.023643 kubelet[1440]: E1002 19:11:28.023593 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:28.051160 kubelet[1440]: E1002 19:11:28.051128 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:11:28.123702 kubelet[1440]: E1002 19:11:28.123673 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:28.224366 kubelet[1440]: E1002 19:11:28.224261 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:28.267131 sudo[1267]: pam_unix(sudo:session): session closed for user root
Oct 2 19:11:28.268684 kernel: kauditd_printk_skb: 474 callbacks suppressed
Oct 2 19:11:28.268807 kernel: audit: type=1106 audit(1696273888.266:572): pid=1267 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:28.266000 audit[1267]: USER_END pid=1267 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:28.269261 sshd[1262]: pam_unix(sshd:session): session closed for user core
Oct 2 19:11:28.266000 audit[1267]: CRED_DISP pid=1267 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:28.271782 kernel: audit: type=1104 audit(1696273888.266:573): pid=1267 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:28.272067 systemd[1]: sshd@6-10.0.0.113:22-10.0.0.1:44062.service: Deactivated successfully.
Oct 2 19:11:28.269000 audit[1262]: USER_END pid=1262 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 2 19:11:28.272886 systemd[1]: session-7.scope: Deactivated successfully.
Oct 2 19:11:28.275335 kernel: audit: type=1106 audit(1696273888.269:574): pid=1262 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 2 19:11:28.275380 kernel: audit: type=1104 audit(1696273888.269:575): pid=1262 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 2 19:11:28.269000 audit[1262]: CRED_DISP pid=1262 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 2 19:11:28.276013 systemd-logind[1127]: Session 7 logged out. Waiting for processes to exit.
Oct 2 19:11:28.277014 systemd-logind[1127]: Removed session 7.
Oct 2 19:11:28.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.113:22-10.0.0.1:44062 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:28.279393 kernel: audit: type=1131 audit(1696273888.271:576): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.113:22-10.0.0.1:44062 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:11:28.324714 kubelet[1440]: E1002 19:11:28.324671 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:28.425086 kubelet[1440]: E1002 19:11:28.425049 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:28.525797 kubelet[1440]: E1002 19:11:28.525293 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:28.625695 kubelet[1440]: E1002 19:11:28.625659 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:28.726048 kubelet[1440]: E1002 19:11:28.726002 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:28.826559 kubelet[1440]: E1002 19:11:28.826358 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:28.926778 kubelet[1440]: E1002 19:11:28.926734 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:29.027131 kubelet[1440]: E1002 19:11:29.027090 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:29.051642 kubelet[1440]: E1002 19:11:29.051602 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:11:29.128182 kubelet[1440]: E1002 19:11:29.127987 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:29.228688 kubelet[1440]: E1002 19:11:29.228646 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:29.329774 kubelet[1440]: E1002 19:11:29.329739 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:29.430588 kubelet[1440]: E1002 19:11:29.430315 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:29.530975 kubelet[1440]: E1002 19:11:29.530927 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:29.631647 kubelet[1440]: E1002 19:11:29.631602 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:29.732368 kubelet[1440]: E1002 19:11:29.732103 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:29.832798 kubelet[1440]: E1002 19:11:29.832763 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:29.933414 kubelet[1440]: E1002 19:11:29.933372 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:30.033917 kubelet[1440]: E1002 19:11:30.033671 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:30.052298 kubelet[1440]: E1002 19:11:30.052266 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:11:30.127042 kubelet[1440]: E1002 19:11:30.127011 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:11:30.134496 kubelet[1440]: E1002 19:11:30.134461 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:30.235109 kubelet[1440]: E1002 19:11:30.235063 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:30.336207 kubelet[1440]: E1002 19:11:30.335903 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:30.436530 kubelet[1440]: E1002 19:11:30.436486 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:30.537112 kubelet[1440]: E1002 19:11:30.537043 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:30.637853 kubelet[1440]: E1002 19:11:30.637587 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:30.738280 kubelet[1440]: E1002 19:11:30.738227 1440 kubelet.go:2448] "Error getting node" err="node \"10.0.0.113\" not found"
Oct 2 19:11:30.838983 kubelet[1440]: I1002 19:11:30.838942 1440 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Oct 2 19:11:30.839449 env[1138]: time="2023-10-02T19:11:30.839401775Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 2 19:11:30.839708 kubelet[1440]: I1002 19:11:30.839670 1440 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Oct 2 19:11:30.840057 kubelet[1440]: E1002 19:11:30.840028 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:11:31.051562 kubelet[1440]: I1002 19:11:31.051282 1440 apiserver.go:52] "Watching apiserver"
Oct 2 19:11:31.052410 kubelet[1440]: E1002 19:11:31.052379 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:11:31.054927 kubelet[1440]: I1002 19:11:31.054903 1440 topology_manager.go:205] "Topology Admit Handler"
Oct 2 19:11:31.054983 kubelet[1440]: I1002 19:11:31.054969 1440 topology_manager.go:205] "Topology Admit Handler"
Oct 2 19:11:31.059359 systemd[1]: Created slice kubepods-besteffort-poda0febf0b_e580_4df9_bd12_e0bfe0f124d3.slice.
Oct 2 19:11:31.084007 systemd[1]: Created slice kubepods-burstable-pod079c025c_7182_4c3b_9417_081eb20ee218.slice.
Oct 2 19:11:31.239189 kubelet[1440]: I1002 19:11:31.239153 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/079c025c-7182-4c3b-9417-081eb20ee218-hubble-tls\") pod \"cilium-qg5th\" (UID: \"079c025c-7182-4c3b-9417-081eb20ee218\") " pod="kube-system/cilium-qg5th"
Oct 2 19:11:31.239379 kubelet[1440]: I1002 19:11:31.239367 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a0febf0b-e580-4df9-bd12-e0bfe0f124d3-xtables-lock\") pod \"kube-proxy-m8mb6\" (UID: \"a0febf0b-e580-4df9-bd12-e0bfe0f124d3\") " pod="kube-system/kube-proxy-m8mb6"
Oct 2 19:11:31.239473 kubelet[1440]: I1002 19:11:31.239462 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwzqv\" (UniqueName: \"kubernetes.io/projected/a0febf0b-e580-4df9-bd12-e0bfe0f124d3-kube-api-access-gwzqv\") pod \"kube-proxy-m8mb6\" (UID: \"a0febf0b-e580-4df9-bd12-e0bfe0f124d3\") " pod="kube-system/kube-proxy-m8mb6"
Oct 2 19:11:31.239620 kubelet[1440]: I1002 19:11:31.239568 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-cilium-cgroup\") pod \"cilium-qg5th\" (UID: \"079c025c-7182-4c3b-9417-081eb20ee218\") " pod="kube-system/cilium-qg5th"
Oct 2 19:11:31.239620 kubelet[1440]: I1002 19:11:31.239624 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/079c025c-7182-4c3b-9417-081eb20ee218-cilium-config-path\") pod \"cilium-qg5th\" (UID: \"079c025c-7182-4c3b-9417-081eb20ee218\") " pod="kube-system/cilium-qg5th"
Oct 2 19:11:31.239718 kubelet[1440]: I1002 19:11:31.239661 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-host-proc-sys-kernel\") pod \"cilium-qg5th\" (UID: \"079c025c-7182-4c3b-9417-081eb20ee218\") " pod="kube-system/cilium-qg5th"
Oct 2 19:11:31.239718 kubelet[1440]: I1002 19:11:31.239690 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-bpf-maps\") pod \"cilium-qg5th\" (UID: \"079c025c-7182-4c3b-9417-081eb20ee218\") " pod="kube-system/cilium-qg5th"
Oct 2 19:11:31.239768 kubelet[1440]: I1002 19:11:31.239737 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-hostproc\") pod \"cilium-qg5th\" (UID: \"079c025c-7182-4c3b-9417-081eb20ee218\") " pod="kube-system/cilium-qg5th"
Oct 2 19:11:31.239795 kubelet[1440]: I1002 19:11:31.239779 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-etc-cni-netd\") pod \"cilium-qg5th\" (UID: \"079c025c-7182-4c3b-9417-081eb20ee218\") " pod="kube-system/cilium-qg5th"
Oct 2 19:11:31.239820 kubelet[1440]: I1002 19:11:31.239804 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-xtables-lock\") pod \"cilium-qg5th\" (UID: \"079c025c-7182-4c3b-9417-081eb20ee218\") " pod="kube-system/cilium-qg5th"
Oct 2 19:11:31.239842 kubelet[1440]: I1002 19:11:31.239825 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-host-proc-sys-net\") pod \"cilium-qg5th\" (UID: \"079c025c-7182-4c3b-9417-081eb20ee218\") " pod="kube-system/cilium-qg5th"
Oct 2 19:11:31.239864 kubelet[1440]: I1002 19:11:31.239844 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a0febf0b-e580-4df9-bd12-e0bfe0f124d3-kube-proxy\") pod \"kube-proxy-m8mb6\" (UID: \"a0febf0b-e580-4df9-bd12-e0bfe0f124d3\") " pod="kube-system/kube-proxy-m8mb6"
Oct 2 19:11:31.239886 kubelet[1440]: I1002 19:11:31.239865 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a0febf0b-e580-4df9-bd12-e0bfe0f124d3-lib-modules\") pod \"kube-proxy-m8mb6\" (UID: \"a0febf0b-e580-4df9-bd12-e0bfe0f124d3\") " pod="kube-system/kube-proxy-m8mb6"
Oct 2 19:11:31.239886 kubelet[1440]: I1002 19:11:31.239883 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-cni-path\") pod \"cilium-qg5th\" (UID: \"079c025c-7182-4c3b-9417-081eb20ee218\") " pod="kube-system/cilium-qg5th"
Oct 2 19:11:31.239928 kubelet[1440]: I1002 19:11:31.239901 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-lib-modules\") pod \"cilium-qg5th\" (UID: \"079c025c-7182-4c3b-9417-081eb20ee218\") " pod="kube-system/cilium-qg5th"
Oct 2 19:11:31.240004 kubelet[1440]: I1002 19:11:31.239986 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/079c025c-7182-4c3b-9417-081eb20ee218-clustermesh-secrets\") pod \"cilium-qg5th\" (UID: \"079c025c-7182-4c3b-9417-081eb20ee218\") " pod="kube-system/cilium-qg5th"
Oct 2 19:11:31.240129 kubelet[1440]: I1002 19:11:31.240113 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-cilium-run\") pod \"cilium-qg5th\" (UID: \"079c025c-7182-4c3b-9417-081eb20ee218\") " pod="kube-system/cilium-qg5th"
Oct 2 19:11:31.240320 kubelet[1440]: I1002 19:11:31.240308 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfv2t\" (UniqueName: \"kubernetes.io/projected/079c025c-7182-4c3b-9417-081eb20ee218-kube-api-access-tfv2t\") pod \"cilium-qg5th\" (UID: \"079c025c-7182-4c3b-9417-081eb20ee218\") " pod="kube-system/cilium-qg5th"
Oct 2 19:11:31.240426 kubelet[1440]: I1002 19:11:31.240414 1440 reconciler.go:169] "Reconciler: start to sync state"
Oct 2 19:11:31.395099 kubelet[1440]: E1002 19:11:31.394960 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:11:31.395772 env[1138]: time="2023-10-02T19:11:31.395701452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qg5th,Uid:079c025c-7182-4c3b-9417-081eb20ee218,Namespace:kube-system,Attempt:0,}"
Oct 2 19:11:31.683269 kubelet[1440]: E1002 19:11:31.683147 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:11:31.683840 env[1138]: time="2023-10-02T19:11:31.683784816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m8mb6,Uid:a0febf0b-e580-4df9-bd12-e0bfe0f124d3,Namespace:kube-system,Attempt:0,}"
Oct 2 19:11:32.020098 env[1138]: time="2023-10-02T19:11:32.019897542Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 2 19:11:32.021347 env[1138]: time="2023-10-02T19:11:32.021306857Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 2 19:11:32.022305 env[1138]: time="2023-10-02T19:11:32.022272049Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 2 19:11:32.023265 env[1138]: time="2023-10-02T19:11:32.023229366Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 2 19:11:32.025731 env[1138]: time="2023-10-02T19:11:32.025699565Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 2 19:11:32.026717 env[1138]: time="2023-10-02T19:11:32.026691777Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 2 19:11:32.030372 env[1138]: time="2023-10-02T19:11:32.030341575Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 2 19:11:32.032114 env[1138]: time="2023-10-02T19:11:32.032085252Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 2 19:11:32.053321 kubelet[1440]: E1002 19:11:32.053278 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:11:32.056648 env[1138]: time="2023-10-02T19:11:32.056589459Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 2 19:11:32.056761 env[1138]: time="2023-10-02T19:11:32.056642302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 2 19:11:32.056761 env[1138]: time="2023-10-02T19:11:32.056654693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 2 19:11:32.056886 env[1138]: time="2023-10-02T19:11:32.056838881Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1345a4f9873144fb6491307fad0e28f0f7738b1cb763d1e1aec9ff89300916c3 pid=1546 runtime=io.containerd.runc.v2
Oct 2 19:11:32.058450 env[1138]: time="2023-10-02T19:11:32.058397730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 2 19:11:32.058528 env[1138]: time="2023-10-02T19:11:32.058433744Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 2 19:11:32.058528 env[1138]: time="2023-10-02T19:11:32.058443218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 2 19:11:32.058614 env[1138]: time="2023-10-02T19:11:32.058560094Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/77e6628201fe12e4d59c92016902a665072d6715692b224dd436ec95378a4b01 pid=1545 runtime=io.containerd.runc.v2
Oct 2 19:11:32.080896 systemd[1]: Started cri-containerd-1345a4f9873144fb6491307fad0e28f0f7738b1cb763d1e1aec9ff89300916c3.scope.
Oct 2 19:11:32.086933 systemd[1]: Started cri-containerd-77e6628201fe12e4d59c92016902a665072d6715692b224dd436ec95378a4b01.scope.
Oct 2 19:11:32.111000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.111000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.116938 kernel: audit: type=1400 audit(1696273892.111:577): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.117003 kernel: audit: type=1400 audit(1696273892.111:578): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.117024 kernel: audit: type=1400 audit(1696273892.111:579): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.111000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.111000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.120608 kernel: audit: type=1400 audit(1696273892.111:580): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.120654 kernel: audit: type=1400 audit(1696273892.111:581): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.111000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.111000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.111000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.111000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.111000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.113000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.113000 audit: BPF prog-id=67 op=LOAD
Oct 2 19:11:32.113000 audit[1565]: AVC avc: denied { bpf } for pid=1565 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.113000 audit[1565]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000145b38 a2=10 a3=0 items=0 ppid=1546 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 2 19:11:32.113000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133343561346639383733313434666236343931333037666164306532
Oct 2 19:11:32.114000 audit[1565]: AVC avc: denied { perfmon } for pid=1565 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.114000 audit[1565]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001455a0 a2=3c a3=0 items=0 ppid=1546 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 2 19:11:32.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133343561346639383733313434666236343931333037666164306532
Oct 2 19:11:32.114000 audit[1565]: AVC avc: denied { bpf } for pid=1565 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.114000 audit[1565]: AVC avc: denied { bpf } for pid=1565 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.114000 audit[1565]: AVC avc: denied { bpf } for pid=1565 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.114000 audit[1565]: AVC avc: denied { perfmon } for pid=1565 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.114000 audit[1565]: AVC avc: denied { perfmon } for pid=1565 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.114000 audit[1565]: AVC avc: denied { perfmon } for pid=1565 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.114000 audit[1565]: AVC avc: denied { perfmon } for pid=1565 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.114000 audit[1565]: AVC avc: denied { perfmon } for pid=1565 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.114000 audit[1565]: AVC avc: denied { bpf } for pid=1565 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.114000 audit[1565]: AVC avc: denied { bpf } for pid=1565 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.114000 audit: BPF prog-id=68 op=LOAD
Oct 2 19:11:32.114000 audit[1565]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001458e0 a2=78 a3=0 items=0 ppid=1546 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 2 19:11:32.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133343561346639383733313434666236343931333037666164306532
Oct 2 19:11:32.116000 audit[1565]: AVC avc: denied { bpf } for pid=1565 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.116000 audit[1565]: AVC avc: denied { bpf } for pid=1565 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.116000 audit[1565]: AVC avc: denied { perfmon } for pid=1565 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.116000 audit[1565]: AVC avc: denied { perfmon } for pid=1565 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.116000 audit[1565]: AVC avc: denied { perfmon } for pid=1565 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.116000 audit[1565]: AVC avc: denied { perfmon } for pid=1565 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.116000 audit[1565]: AVC avc: denied { perfmon } for pid=1565 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.116000 audit[1565]: AVC avc: denied { bpf } for pid=1565 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.116000 audit[1565]: AVC avc: denied { bpf } for pid=1565 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.116000 audit: BPF prog-id=69 op=LOAD
Oct 2 19:11:32.116000 audit[1565]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000145670 a2=78 a3=0 items=0 ppid=1546 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 2 19:11:32.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133343561346639383733313434666236343931333037666164306532
Oct 2 19:11:32.118000 audit: BPF prog-id=69 op=UNLOAD
Oct 2 19:11:32.118000 audit: BPF prog-id=68 op=UNLOAD
Oct 2 19:11:32.118000 audit[1565]: AVC avc: denied { bpf } for pid=1565 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.118000 audit[1565]: AVC avc: denied { bpf } for pid=1565 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.118000 audit[1565]: AVC avc: denied { bpf } for pid=1565 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.118000 audit[1565]: AVC avc: denied { perfmon } for pid=1565 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.118000 audit[1565]: AVC avc: denied { perfmon } for pid=1565 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.118000 audit[1565]: AVC avc: denied { perfmon } for pid=1565 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.118000 audit[1565]: AVC avc: denied { perfmon } for pid=1565 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.118000 audit[1565]: AVC avc: denied { perfmon } for pid=1565 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.118000 audit[1565]: AVC avc: denied { bpf } for pid=1565 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.118000 audit[1565]: AVC avc: denied { bpf } for pid=1565 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.118000 audit: BPF prog-id=70 op=LOAD
Oct 2 19:11:32.118000 audit[1565]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000145b40 a2=78 a3=0 items=0 ppid=1546 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 2 19:11:32.118000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133343561346639383733313434666236343931333037666164306532
Oct 2 19:11:32.122000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.122000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.122000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.122000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.122000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.122000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.122000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.122000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:11:32.122000 audit[1]: AVC avc: denied { bpf } for pid=1
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:32.122000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:32.122000 audit: BPF prog-id=71 op=LOAD Oct 2 19:11:32.122000 audit[1566]: AVC avc: denied { bpf } for pid=1566 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:32.122000 audit[1566]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000195b38 a2=10 a3=0 items=0 ppid=1545 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:32.122000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737653636323832303166653132653464353963393230313639303261 Oct 2 19:11:32.123000 audit[1566]: AVC avc: denied { perfmon } for pid=1566 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:32.123000 audit[1566]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=1545 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:32.123000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737653636323832303166653132653464353963393230313639303261 Oct 2 19:11:32.123000 audit[1566]: AVC avc: denied { bpf } for pid=1566 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:32.123000 audit[1566]: AVC avc: denied { bpf } for pid=1566 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:32.123000 audit[1566]: AVC avc: denied { bpf } for pid=1566 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:32.123000 audit[1566]: AVC avc: denied { perfmon } for pid=1566 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:32.123000 audit[1566]: AVC avc: denied { perfmon } for pid=1566 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:32.123000 audit[1566]: AVC avc: denied { perfmon } for pid=1566 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:32.123000 audit[1566]: AVC avc: denied { perfmon } for pid=1566 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:32.123000 audit[1566]: AVC avc: denied { perfmon } for pid=1566 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:32.123000 audit[1566]: AVC avc: 
denied { bpf } for pid=1566 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:32.123000 audit[1566]: AVC avc: denied { bpf } for pid=1566 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:32.123000 audit: BPF prog-id=72 op=LOAD Oct 2 19:11:32.123000 audit[1566]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=1545 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:32.123000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737653636323832303166653132653464353963393230313639303261 Oct 2 19:11:32.123000 audit[1566]: AVC avc: denied { bpf } for pid=1566 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:32.123000 audit[1566]: AVC avc: denied { bpf } for pid=1566 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:32.123000 audit[1566]: AVC avc: denied { perfmon } for pid=1566 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:32.123000 audit[1566]: AVC avc: denied { perfmon } for pid=1566 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:32.123000 audit[1566]: AVC avc: denied { perfmon } for pid=1566 comm="runc" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:32.123000 audit[1566]: AVC avc: denied { perfmon } for pid=1566 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:32.123000 audit[1566]: AVC avc: denied { perfmon } for pid=1566 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:32.123000 audit[1566]: AVC avc: denied { bpf } for pid=1566 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:32.123000 audit[1566]: AVC avc: denied { bpf } for pid=1566 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:32.123000 audit: BPF prog-id=73 op=LOAD Oct 2 19:11:32.123000 audit[1566]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=1545 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:32.123000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737653636323832303166653132653464353963393230313639303261 Oct 2 19:11:32.123000 audit: BPF prog-id=73 op=UNLOAD Oct 2 19:11:32.123000 audit: BPF prog-id=72 op=UNLOAD Oct 2 19:11:32.123000 audit[1566]: AVC avc: denied { bpf } for pid=1566 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:11:32.123000 audit[1566]: AVC avc: denied { bpf } for pid=1566 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:32.123000 audit[1566]: AVC avc: denied { bpf } for pid=1566 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:32.123000 audit[1566]: AVC avc: denied { perfmon } for pid=1566 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:32.123000 audit[1566]: AVC avc: denied { perfmon } for pid=1566 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:32.123000 audit[1566]: AVC avc: denied { perfmon } for pid=1566 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:32.123000 audit[1566]: AVC avc: denied { perfmon } for pid=1566 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:32.123000 audit[1566]: AVC avc: denied { perfmon } for pid=1566 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:32.123000 audit[1566]: AVC avc: denied { bpf } for pid=1566 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:32.123000 audit[1566]: AVC avc: denied { bpf } for pid=1566 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:32.123000 audit: BPF prog-id=74 op=LOAD Oct 2 19:11:32.123000 audit[1566]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=16 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=1545 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:32.123000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737653636323832303166653132653464353963393230313639303261 Oct 2 19:11:32.138488 env[1138]: time="2023-10-02T19:11:32.138443175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qg5th,Uid:079c025c-7182-4c3b-9417-081eb20ee218,Namespace:kube-system,Attempt:0,} returns sandbox id \"1345a4f9873144fb6491307fad0e28f0f7738b1cb763d1e1aec9ff89300916c3\"" Oct 2 19:11:32.139925 kubelet[1440]: E1002 19:11:32.139646 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:11:32.140259 env[1138]: time="2023-10-02T19:11:32.140212634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m8mb6,Uid:a0febf0b-e580-4df9-bd12-e0bfe0f124d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"77e6628201fe12e4d59c92016902a665072d6715692b224dd436ec95378a4b01\"" Oct 2 19:11:32.140830 env[1138]: time="2023-10-02T19:11:32.140802733Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\"" Oct 2 19:11:32.140982 kubelet[1440]: E1002 19:11:32.140969 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:11:32.348169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount254146281.mount: Deactivated 
successfully. Oct 2 19:11:33.054136 kubelet[1440]: E1002 19:11:33.054075 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:34.055216 kubelet[1440]: E1002 19:11:34.055161 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:35.041808 kubelet[1440]: E1002 19:11:35.041750 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:35.056065 kubelet[1440]: E1002 19:11:35.056027 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:35.127471 kubelet[1440]: E1002 19:11:35.127442 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:11:36.056344 kubelet[1440]: E1002 19:11:36.056287 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:36.072233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount247670696.mount: Deactivated successfully. 
Oct 2 19:11:37.057399 kubelet[1440]: E1002 19:11:37.057325 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:38.058201 kubelet[1440]: E1002 19:11:38.058167 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:38.359084 env[1138]: time="2023-10-02T19:11:38.358789145Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:11:38.360643 env[1138]: time="2023-10-02T19:11:38.360605346Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4204f456d3e4a8a7ac29109cf66dfd9b53e82d3f2e8574599e358096d890b8db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:11:38.362700 env[1138]: time="2023-10-02T19:11:38.362666588Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:11:38.363672 env[1138]: time="2023-10-02T19:11:38.363642516Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\" returns image reference \"sha256:4204f456d3e4a8a7ac29109cf66dfd9b53e82d3f2e8574599e358096d890b8db\"" Oct 2 19:11:38.364421 env[1138]: time="2023-10-02T19:11:38.364395671Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\"" Oct 2 19:11:38.365913 env[1138]: time="2023-10-02T19:11:38.365837853Z" level=info msg="CreateContainer within sandbox \"1345a4f9873144fb6491307fad0e28f0f7738b1cb763d1e1aec9ff89300916c3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:11:38.377885 env[1138]: 
time="2023-10-02T19:11:38.377835607Z" level=info msg="CreateContainer within sandbox \"1345a4f9873144fb6491307fad0e28f0f7738b1cb763d1e1aec9ff89300916c3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"78ec9cdfdb6d0349d012acdcc46b5ef5a9774a8347baab48a92ec7344804b773\"" Oct 2 19:11:38.379059 env[1138]: time="2023-10-02T19:11:38.379013037Z" level=info msg="StartContainer for \"78ec9cdfdb6d0349d012acdcc46b5ef5a9774a8347baab48a92ec7344804b773\"" Oct 2 19:11:38.396179 systemd[1]: Started cri-containerd-78ec9cdfdb6d0349d012acdcc46b5ef5a9774a8347baab48a92ec7344804b773.scope. Oct 2 19:11:38.417166 systemd[1]: cri-containerd-78ec9cdfdb6d0349d012acdcc46b5ef5a9774a8347baab48a92ec7344804b773.scope: Deactivated successfully. Oct 2 19:11:38.420523 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78ec9cdfdb6d0349d012acdcc46b5ef5a9774a8347baab48a92ec7344804b773-rootfs.mount: Deactivated successfully. Oct 2 19:11:38.566398 env[1138]: time="2023-10-02T19:11:38.566107325Z" level=info msg="shim disconnected" id=78ec9cdfdb6d0349d012acdcc46b5ef5a9774a8347baab48a92ec7344804b773 Oct 2 19:11:38.566398 env[1138]: time="2023-10-02T19:11:38.566150145Z" level=warning msg="cleaning up after shim disconnected" id=78ec9cdfdb6d0349d012acdcc46b5ef5a9774a8347baab48a92ec7344804b773 namespace=k8s.io Oct 2 19:11:38.566398 env[1138]: time="2023-10-02T19:11:38.566158900Z" level=info msg="cleaning up dead shim" Oct 2 19:11:38.575668 env[1138]: time="2023-10-02T19:11:38.575599691Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:11:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1645 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:11:38Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/78ec9cdfdb6d0349d012acdcc46b5ef5a9774a8347baab48a92ec7344804b773/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:11:38.575959 env[1138]: 
time="2023-10-02T19:11:38.575856687Z" level=error msg="copy shim log" error="read /proc/self/fd/44: file already closed" Oct 2 19:11:38.576114 env[1138]: time="2023-10-02T19:11:38.576073502Z" level=error msg="Failed to pipe stdout of container \"78ec9cdfdb6d0349d012acdcc46b5ef5a9774a8347baab48a92ec7344804b773\"" error="reading from a closed fifo" Oct 2 19:11:38.579129 env[1138]: time="2023-10-02T19:11:38.579074209Z" level=error msg="Failed to pipe stderr of container \"78ec9cdfdb6d0349d012acdcc46b5ef5a9774a8347baab48a92ec7344804b773\"" error="reading from a closed fifo" Oct 2 19:11:38.580709 env[1138]: time="2023-10-02T19:11:38.580654285Z" level=error msg="StartContainer for \"78ec9cdfdb6d0349d012acdcc46b5ef5a9774a8347baab48a92ec7344804b773\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:11:38.581070 kubelet[1440]: E1002 19:11:38.581042 1440 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="78ec9cdfdb6d0349d012acdcc46b5ef5a9774a8347baab48a92ec7344804b773" Oct 2 19:11:38.581191 kubelet[1440]: E1002 19:11:38.581174 1440 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:11:38.581191 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:11:38.581191 kubelet[1440]: rm /hostbin/cilium-mount 
Oct 2 19:11:38.581191 kubelet[1440]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tfv2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-qg5th_kube-system(079c025c-7182-4c3b-9417-081eb20ee218): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:11:38.581355 kubelet[1440]: E1002 19:11:38.581215 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during 
container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-qg5th" podUID=079c025c-7182-4c3b-9417-081eb20ee218 Oct 2 19:11:39.058658 kubelet[1440]: E1002 19:11:39.058558 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:39.296206 kubelet[1440]: E1002 19:11:39.296158 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:11:39.302906 env[1138]: time="2023-10-02T19:11:39.302860881Z" level=info msg="CreateContainer within sandbox \"1345a4f9873144fb6491307fad0e28f0f7738b1cb763d1e1aec9ff89300916c3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:11:39.314611 env[1138]: time="2023-10-02T19:11:39.314376136Z" level=info msg="CreateContainer within sandbox \"1345a4f9873144fb6491307fad0e28f0f7738b1cb763d1e1aec9ff89300916c3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"abb25711aea0601899f78f21b306e733d1184529f76e54fc57e2d6a84982ae25\"" Oct 2 19:11:39.315206 env[1138]: time="2023-10-02T19:11:39.315172055Z" level=info msg="StartContainer for \"abb25711aea0601899f78f21b306e733d1184529f76e54fc57e2d6a84982ae25\"" Oct 2 19:11:39.333886 systemd[1]: Started cri-containerd-abb25711aea0601899f78f21b306e733d1184529f76e54fc57e2d6a84982ae25.scope. Oct 2 19:11:39.355008 systemd[1]: cri-containerd-abb25711aea0601899f78f21b306e733d1184529f76e54fc57e2d6a84982ae25.scope: Deactivated successfully. 
Oct 2 19:11:39.378843 env[1138]: time="2023-10-02T19:11:39.378790349Z" level=info msg="shim disconnected" id=abb25711aea0601899f78f21b306e733d1184529f76e54fc57e2d6a84982ae25 Oct 2 19:11:39.379230 env[1138]: time="2023-10-02T19:11:39.379208600Z" level=warning msg="cleaning up after shim disconnected" id=abb25711aea0601899f78f21b306e733d1184529f76e54fc57e2d6a84982ae25 namespace=k8s.io Oct 2 19:11:39.379302 env[1138]: time="2023-10-02T19:11:39.379287444Z" level=info msg="cleaning up dead shim" Oct 2 19:11:39.387404 env[1138]: time="2023-10-02T19:11:39.387363939Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:11:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1683 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:11:39Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/abb25711aea0601899f78f21b306e733d1184529f76e54fc57e2d6a84982ae25/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:11:39.387808 env[1138]: time="2023-10-02T19:11:39.387756721Z" level=error msg="copy shim log" error="read /proc/self/fd/53: file already closed" Oct 2 19:11:39.389941 env[1138]: time="2023-10-02T19:11:39.388068540Z" level=error msg="Failed to pipe stderr of container \"abb25711aea0601899f78f21b306e733d1184529f76e54fc57e2d6a84982ae25\"" error="reading from a closed fifo" Oct 2 19:11:39.390078 env[1138]: time="2023-10-02T19:11:39.389576056Z" level=error msg="Failed to pipe stdout of container \"abb25711aea0601899f78f21b306e733d1184529f76e54fc57e2d6a84982ae25\"" error="reading from a closed fifo" Oct 2 19:11:39.391860 env[1138]: time="2023-10-02T19:11:39.391822676Z" level=error msg="StartContainer for \"abb25711aea0601899f78f21b306e733d1184529f76e54fc57e2d6a84982ae25\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:11:39.392143 kubelet[1440]: E1002 19:11:39.392112 1440 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="abb25711aea0601899f78f21b306e733d1184529f76e54fc57e2d6a84982ae25" Oct 2 19:11:39.392229 kubelet[1440]: E1002 19:11:39.392207 1440 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:11:39.392229 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:11:39.392229 kubelet[1440]: rm /hostbin/cilium-mount Oct 2 19:11:39.392229 kubelet[1440]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tfv2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-qg5th_kube-system(079c025c-7182-4c3b-9417-081eb20ee218): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:11:39.392447 kubelet[1440]: E1002 19:11:39.392242 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-qg5th" podUID=079c025c-7182-4c3b-9417-081eb20ee218 Oct 2 19:11:39.603661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount945954491.mount: Deactivated successfully. 
Oct 2 19:11:39.916779 env[1138]: time="2023-10-02T19:11:39.916547955Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:11:39.917782 env[1138]: time="2023-10-02T19:11:39.917754568Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:36ad84e6a838b02d80a9db87b13c83185253f647e2af2f58f91ac1346103ff4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:11:39.919340 env[1138]: time="2023-10-02T19:11:39.919304185Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:11:39.920490 env[1138]: time="2023-10-02T19:11:39.920462379Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:11:39.921529 env[1138]: time="2023-10-02T19:11:39.921494991Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\" returns image reference \"sha256:36ad84e6a838b02d80a9db87b13c83185253f647e2af2f58f91ac1346103ff4e\"" Oct 2 19:11:39.923022 env[1138]: time="2023-10-02T19:11:39.922991792Z" level=info msg="CreateContainer within sandbox \"77e6628201fe12e4d59c92016902a665072d6715692b224dd436ec95378a4b01\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:11:39.931594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3896676424.mount: Deactivated successfully. 
Oct 2 19:11:39.938512 env[1138]: time="2023-10-02T19:11:39.938465251Z" level=info msg="CreateContainer within sandbox \"77e6628201fe12e4d59c92016902a665072d6715692b224dd436ec95378a4b01\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e28431e3c9ade5d181a40130213dbdc0bb3ac509f5bc9b96a91c75c7bae1feb5\"" Oct 2 19:11:39.938897 env[1138]: time="2023-10-02T19:11:39.938869947Z" level=info msg="StartContainer for \"e28431e3c9ade5d181a40130213dbdc0bb3ac509f5bc9b96a91c75c7bae1feb5\"" Oct 2 19:11:39.954951 systemd[1]: Started cri-containerd-e28431e3c9ade5d181a40130213dbdc0bb3ac509f5bc9b96a91c75c7bae1feb5.scope. Oct 2 19:11:39.979000 audit[1704]: AVC avc: denied { perfmon } for pid=1704 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.981816 kernel: kauditd_printk_skb: 109 callbacks suppressed Oct 2 19:11:39.981873 kernel: audit: type=1400 audit(1696273899.979:613): avc: denied { perfmon } for pid=1704 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.981902 kernel: audit: type=1300 audit(1696273899.979:613): arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001475a0 a2=3c a3=0 items=0 ppid=1545 pid=1704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:39.979000 audit[1704]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001475a0 a2=3c a3=0 items=0 ppid=1545 pid=1704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:39.979000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532383433316533633961646535643138316134303133303231336462 Oct 2 19:11:39.986590 kernel: audit: type=1327 audit(1696273899.979:613): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532383433316533633961646535643138316134303133303231336462 Oct 2 19:11:39.986633 kernel: audit: type=1400 audit(1696273899.979:614): avc: denied { bpf } for pid=1704 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.979000 audit[1704]: AVC avc: denied { bpf } for pid=1704 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.979000 audit[1704]: AVC avc: denied { bpf } for pid=1704 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.989804 kernel: audit: type=1400 audit(1696273899.979:614): avc: denied { bpf } for pid=1704 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.989852 kernel: audit: type=1400 audit(1696273899.979:614): avc: denied { bpf } for pid=1704 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.979000 audit[1704]: AVC avc: denied { bpf } for pid=1704 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.979000 audit[1704]: AVC 
avc: denied { perfmon } for pid=1704 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.992999 kernel: audit: type=1400 audit(1696273899.979:614): avc: denied { perfmon } for pid=1704 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.993069 kernel: audit: type=1400 audit(1696273899.979:614): avc: denied { perfmon } for pid=1704 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.979000 audit[1704]: AVC avc: denied { perfmon } for pid=1704 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.979000 audit[1704]: AVC avc: denied { perfmon } for pid=1704 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.996622 kernel: audit: type=1400 audit(1696273899.979:614): avc: denied { perfmon } for pid=1704 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.996717 kernel: audit: type=1400 audit(1696273899.979:614): avc: denied { perfmon } for pid=1704 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.979000 audit[1704]: AVC avc: denied { perfmon } for pid=1704 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.979000 audit[1704]: AVC avc: denied { perfmon } for pid=1704 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:11:39.979000 audit[1704]: AVC avc: denied { bpf } for pid=1704 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.979000 audit[1704]: AVC avc: denied { bpf } for pid=1704 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.979000 audit: BPF prog-id=75 op=LOAD Oct 2 19:11:39.979000 audit[1704]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001478e0 a2=78 a3=0 items=0 ppid=1545 pid=1704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:39.979000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532383433316533633961646535643138316134303133303231336462 Oct 2 19:11:39.979000 audit[1704]: AVC avc: denied { bpf } for pid=1704 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.979000 audit[1704]: AVC avc: denied { bpf } for pid=1704 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.979000 audit[1704]: AVC avc: denied { perfmon } for pid=1704 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.979000 audit[1704]: AVC avc: denied { perfmon } for pid=1704 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.979000 
audit[1704]: AVC avc: denied { perfmon } for pid=1704 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.979000 audit[1704]: AVC avc: denied { perfmon } for pid=1704 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.979000 audit[1704]: AVC avc: denied { perfmon } for pid=1704 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.979000 audit[1704]: AVC avc: denied { bpf } for pid=1704 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.979000 audit[1704]: AVC avc: denied { bpf } for pid=1704 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.979000 audit: BPF prog-id=76 op=LOAD Oct 2 19:11:39.979000 audit[1704]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000147670 a2=78 a3=0 items=0 ppid=1545 pid=1704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:39.979000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532383433316533633961646535643138316134303133303231336462 Oct 2 19:11:39.981000 audit: BPF prog-id=76 op=UNLOAD Oct 2 19:11:39.981000 audit: BPF prog-id=75 op=UNLOAD Oct 2 19:11:39.981000 audit[1704]: AVC avc: denied { bpf } for pid=1704 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.981000 audit[1704]: AVC avc: denied { bpf } for pid=1704 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.981000 audit[1704]: AVC avc: denied { bpf } for pid=1704 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.981000 audit[1704]: AVC avc: denied { perfmon } for pid=1704 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.981000 audit[1704]: AVC avc: denied { perfmon } for pid=1704 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.981000 audit[1704]: AVC avc: denied { perfmon } for pid=1704 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.981000 audit[1704]: AVC avc: denied { perfmon } for pid=1704 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.981000 audit[1704]: AVC avc: denied { perfmon } for pid=1704 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.981000 audit[1704]: AVC avc: denied { bpf } for pid=1704 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.981000 audit[1704]: AVC avc: denied { bpf } for pid=1704 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:11:39.981000 audit: BPF 
prog-id=77 op=LOAD Oct 2 19:11:39.981000 audit[1704]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000147b40 a2=78 a3=0 items=0 ppid=1545 pid=1704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:39.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532383433316533633961646535643138316134303133303231336462 Oct 2 19:11:40.008852 env[1138]: time="2023-10-02T19:11:40.008027938Z" level=info msg="StartContainer for \"e28431e3c9ade5d181a40130213dbdc0bb3ac509f5bc9b96a91c75c7bae1feb5\" returns successfully" Oct 2 19:11:40.059536 kubelet[1440]: E1002 19:11:40.059486 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:40.077283 kernel: IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) Oct 2 19:11:40.077429 kernel: IPVS: Connection hash table configured (size=4096, memory=32Kbytes) Oct 2 19:11:40.077471 kernel: IPVS: ipvs loaded. Oct 2 19:11:40.085854 kernel: IPVS: [rr] scheduler registered. Oct 2 19:11:40.092935 kernel: IPVS: [wrr] scheduler registered. Oct 2 19:11:40.097652 kernel: IPVS: [sh] scheduler registered. 
Oct 2 19:11:40.131530 kubelet[1440]: E1002 19:11:40.131492 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:11:40.145000 audit[1765]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1765 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:40.145000 audit[1765]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff59fabe0 a2=0 a3=ffff83a526c0 items=0 ppid=1714 pid=1765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.145000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:11:40.146000 audit[1764]: NETFILTER_CFG table=mangle:36 family=2 entries=1 op=nft_register_chain pid=1764 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:40.146000 audit[1764]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffea7c6380 a2=0 a3=ffffb01286c0 items=0 ppid=1714 pid=1764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.146000 audit[1766]: NETFILTER_CFG table=nat:37 family=10 entries=1 op=nft_register_chain pid=1766 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:40.146000 audit[1766]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffed1d4080 a2=0 a3=ffffbd27e6c0 items=0 ppid=1714 pid=1766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.146000 audit: 
PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:11:40.146000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:11:40.147000 audit[1767]: NETFILTER_CFG table=filter:38 family=10 entries=1 op=nft_register_chain pid=1767 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:40.147000 audit[1767]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe366f9d0 a2=0 a3=ffff9d5b56c0 items=0 ppid=1714 pid=1767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.147000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:11:40.148000 audit[1768]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_chain pid=1768 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:40.148000 audit[1768]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffedff5a30 a2=0 a3=ffff8a5bb6c0 items=0 ppid=1714 pid=1768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.148000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:11:40.151000 audit[1769]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain pid=1769 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:40.151000 audit[1769]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffca62bc40 a2=0 a3=ffff909c76c0 items=0 ppid=1714 pid=1769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.151000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:11:40.248000 audit[1770]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=1770 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:40.248000 audit[1770]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff3098270 a2=0 a3=ffffa39e26c0 items=0 ppid=1714 pid=1770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.248000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:11:40.253000 audit[1772]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=1772 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:40.253000 audit[1772]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffce71a7f0 a2=0 a3=ffffa4bfa6c0 items=0 ppid=1714 pid=1772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.253000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:11:40.259000 audit[1775]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=1775 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 
19:11:40.259000 audit[1775]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc31e0b90 a2=0 a3=ffff907966c0 items=0 ppid=1714 pid=1775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.259000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:11:40.259000 audit[1776]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=1776 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:40.259000 audit[1776]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd8b43280 a2=0 a3=ffff99ecb6c0 items=0 ppid=1714 pid=1776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.259000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:11:40.262000 audit[1778]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=1778 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:40.262000 audit[1778]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc32b83a0 a2=0 a3=ffff8fcf46c0 items=0 ppid=1714 pid=1778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.262000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:11:40.264000 audit[1779]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=1779 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:40.264000 audit[1779]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd1176280 a2=0 a3=ffff911a26c0 items=0 ppid=1714 pid=1779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.264000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:11:40.269000 audit[1781]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=1781 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:40.269000 audit[1781]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffff8e5d10 a2=0 a3=ffffaced36c0 items=0 ppid=1714 pid=1781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.269000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:11:40.275000 audit[1784]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=1784 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:40.275000 audit[1784]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 
a0=3 a1=ffffd8a57d90 a2=0 a3=ffffa19436c0 items=0 ppid=1714 pid=1784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.275000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:11:40.276000 audit[1785]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=1785 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:40.276000 audit[1785]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd19e2430 a2=0 a3=ffff9db2c6c0 items=0 ppid=1714 pid=1785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.276000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:11:40.279000 audit[1787]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=1787 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:40.279000 audit[1787]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd2800160 a2=0 a3=ffff8e0826c0 items=0 ppid=1714 pid=1787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.279000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:11:40.280000 audit[1788]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=1788 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:40.280000 audit[1788]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff03319c0 a2=0 a3=ffff9a6b56c0 items=0 ppid=1714 pid=1788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.280000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:11:40.284000 audit[1790]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=1790 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:40.284000 audit[1790]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe645e440 a2=0 a3=ffff895b06c0 items=0 ppid=1714 pid=1790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.284000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:11:40.287000 audit[1793]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=1793 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:40.287000 audit[1793]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 
a1=ffffee580950 a2=0 a3=ffff8f3a76c0 items=0 ppid=1714 pid=1793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.287000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:11:40.293000 audit[1796]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=1796 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:40.293000 audit[1796]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc815ef90 a2=0 a3=ffff898da6c0 items=0 ppid=1714 pid=1796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.293000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:11:40.295000 audit[1797]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=1797 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:40.295000 audit[1797]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc73563e0 a2=0 a3=ffff935da6c0 items=0 ppid=1714 pid=1797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.295000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:11:40.298992 kubelet[1440]: E1002 19:11:40.298965 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:11:40.300748 kubelet[1440]: I1002 19:11:40.300726 1440 scope.go:115] "RemoveContainer" containerID="78ec9cdfdb6d0349d012acdcc46b5ef5a9774a8347baab48a92ec7344804b773" Oct 2 19:11:40.301038 kubelet[1440]: I1002 19:11:40.301016 1440 scope.go:115] "RemoveContainer" containerID="78ec9cdfdb6d0349d012acdcc46b5ef5a9774a8347baab48a92ec7344804b773" Oct 2 19:11:40.301000 audit[1799]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=1799 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:40.301000 audit[1799]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=fffffeb309a0 a2=0 a3=ffff9b1326c0 items=0 ppid=1714 pid=1799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.302796 env[1138]: time="2023-10-02T19:11:40.302732864Z" level=info msg="RemoveContainer for \"78ec9cdfdb6d0349d012acdcc46b5ef5a9774a8347baab48a92ec7344804b773\"" Oct 2 19:11:40.301000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:11:40.303435 env[1138]: time="2023-10-02T19:11:40.303407257Z" level=info msg="RemoveContainer for \"78ec9cdfdb6d0349d012acdcc46b5ef5a9774a8347baab48a92ec7344804b773\"" Oct 2 19:11:40.303515 env[1138]: time="2023-10-02T19:11:40.303483705Z" level=error msg="RemoveContainer for \"78ec9cdfdb6d0349d012acdcc46b5ef5a9774a8347baab48a92ec7344804b773\" 
failed" error="failed to set removing state for container \"78ec9cdfdb6d0349d012acdcc46b5ef5a9774a8347baab48a92ec7344804b773\": container is already in removing state" Oct 2 19:11:40.303723 kubelet[1440]: E1002 19:11:40.303598 1440 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"78ec9cdfdb6d0349d012acdcc46b5ef5a9774a8347baab48a92ec7344804b773\": container is already in removing state" containerID="78ec9cdfdb6d0349d012acdcc46b5ef5a9774a8347baab48a92ec7344804b773" Oct 2 19:11:40.303723 kubelet[1440]: E1002 19:11:40.303661 1440 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "78ec9cdfdb6d0349d012acdcc46b5ef5a9774a8347baab48a92ec7344804b773": container is already in removing state; Skipping pod "cilium-qg5th_kube-system(079c025c-7182-4c3b-9417-081eb20ee218)" Oct 2 19:11:40.303723 kubelet[1440]: E1002 19:11:40.303718 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:11:40.303916 kubelet[1440]: E1002 19:11:40.303897 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-qg5th_kube-system(079c025c-7182-4c3b-9417-081eb20ee218)\"" pod="kube-system/cilium-qg5th" podUID=079c025c-7182-4c3b-9417-081eb20ee218 Oct 2 19:11:40.307541 env[1138]: time="2023-10-02T19:11:40.307493319Z" level=info msg="RemoveContainer for \"78ec9cdfdb6d0349d012acdcc46b5ef5a9774a8347baab48a92ec7344804b773\" returns successfully" Oct 2 19:11:40.307000 audit[1802]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=1802 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:11:40.307000 
audit[1802]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffdcda8de0 a2=0 a3=ffffa42776c0 items=0 ppid=1714 pid=1802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.307000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:11:40.322000 audit[1806]: NETFILTER_CFG table=filter:58 family=2 entries=6 op=nft_register_rule pid=1806 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:11:40.322000 audit[1806]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=fffff1e3ddb0 a2=0 a3=ffff8e0876c0 items=0 ppid=1714 pid=1806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.322000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:11:40.344000 audit[1806]: NETFILTER_CFG table=nat:59 family=2 entries=17 op=nft_register_chain pid=1806 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:11:40.344000 audit[1806]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=fffff1e3ddb0 a2=0 a3=ffff8e0876c0 items=0 ppid=1714 pid=1806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.344000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:11:40.345000 
audit[1810]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=1810 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:40.345000 audit[1810]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff5a95b20 a2=0 a3=ffff9b7ed6c0 items=0 ppid=1714 pid=1810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.345000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:11:40.349000 audit[1812]: NETFILTER_CFG table=filter:61 family=10 entries=2 op=nft_register_chain pid=1812 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:40.349000 audit[1812]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffed3c7320 a2=0 a3=ffff8b9076c0 items=0 ppid=1714 pid=1812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.349000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:11:40.358000 audit[1815]: NETFILTER_CFG table=filter:62 family=10 entries=2 op=nft_register_chain pid=1815 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:40.358000 audit[1815]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd5f7ea60 a2=0 a3=ffff9d3b36c0 items=0 ppid=1714 pid=1815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.358000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 19:11:40.360000 audit[1816]: NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=1816 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:40.360000 audit[1816]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc2ec7750 a2=0 a3=ffff920be6c0 items=0 ppid=1714 pid=1816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.360000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:11:40.363000 audit[1818]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_rule pid=1818 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:40.363000 audit[1818]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc34c6d90 a2=0 a3=ffffb2bf76c0 items=0 ppid=1714 pid=1818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.363000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:11:40.365000 audit[1819]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=1819 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Oct 2 19:11:40.365000 audit[1819]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd8a5f630 a2=0 a3=ffffa92886c0 items=0 ppid=1714 pid=1819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.365000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:11:40.368000 audit[1821]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=1821 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:40.368000 audit[1821]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff21a36e0 a2=0 a3=ffffba1926c0 items=0 ppid=1714 pid=1821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.368000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 19:11:40.373948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1791148900.mount: Deactivated successfully. 
Oct 2 19:11:40.374000 audit[1824]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=1824 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:40.374000 audit[1824]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffc313d8e0 a2=0 a3=ffff85e2e6c0 items=0 ppid=1714 pid=1824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.374000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:11:40.376000 audit[1825]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=1825 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:40.376000 audit[1825]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffca8ea140 a2=0 a3=ffffa7dce6c0 items=0 ppid=1714 pid=1825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.376000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:11:40.378000 audit[1827]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=1827 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:40.378000 audit[1827]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd0299fe0 a2=0 a3=ffff8decc6c0 items=0 ppid=1714 pid=1827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.378000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:11:40.381000 audit[1828]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=1828 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:40.381000 audit[1828]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffeb5540c0 a2=0 a3=ffff9e5a36c0 items=0 ppid=1714 pid=1828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.381000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:11:40.384000 audit[1830]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=1830 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:40.384000 audit[1830]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc7a27f80 a2=0 a3=ffffbf64e6c0 items=0 ppid=1714 pid=1830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.384000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:11:40.388000 audit[1833]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=1833 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 
2 19:11:40.388000 audit[1833]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd5bb45f0 a2=0 a3=ffffb4c056c0 items=0 ppid=1714 pid=1833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.388000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:11:40.392000 audit[1836]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=1836 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:40.392000 audit[1836]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffbec3140 a2=0 a3=ffffbda526c0 items=0 ppid=1714 pid=1836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.392000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:11:40.393000 audit[1837]: NETFILTER_CFG table=nat:74 family=10 entries=1 op=nft_register_chain pid=1837 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:40.393000 audit[1837]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffb8d0910 a2=0 a3=ffffb6bca6c0 items=0 ppid=1714 pid=1837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Oct 2 19:11:40.393000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:11:40.396000 audit[1839]: NETFILTER_CFG table=nat:75 family=10 entries=2 op=nft_register_chain pid=1839 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:40.396000 audit[1839]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffd1299960 a2=0 a3=ffff931e26c0 items=0 ppid=1714 pid=1839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.396000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:11:40.400000 audit[1842]: NETFILTER_CFG table=nat:76 family=10 entries=2 op=nft_register_chain pid=1842 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:11:40.400000 audit[1842]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffee59e0a0 a2=0 a3=ffffba98f6c0 items=0 ppid=1714 pid=1842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.400000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:11:40.405000 audit[1846]: NETFILTER_CFG table=filter:77 family=10 entries=3 op=nft_register_rule pid=1846 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:11:40.405000 audit[1846]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 
a1=fffffd8f7240 a2=0 a3=ffff84aeb6c0 items=0 ppid=1714 pid=1846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.405000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:11:40.406000 audit[1846]: NETFILTER_CFG table=nat:78 family=10 entries=10 op=nft_register_chain pid=1846 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:11:40.406000 audit[1846]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1860 a0=3 a1=fffffd8f7240 a2=0 a3=ffff84aeb6c0 items=0 ppid=1714 pid=1846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:11:40.406000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:11:41.060675 kubelet[1440]: E1002 19:11:41.060624 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:41.304898 kubelet[1440]: E1002 19:11:41.304849 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:11:41.305112 kubelet[1440]: E1002 19:11:41.304913 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:11:41.305146 kubelet[1440]: E1002 19:11:41.305132 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=mount-cgroup pod=cilium-qg5th_kube-system(079c025c-7182-4c3b-9417-081eb20ee218)\"" pod="kube-system/cilium-qg5th" podUID=079c025c-7182-4c3b-9417-081eb20ee218 Oct 2 19:11:41.686685 kubelet[1440]: W1002 19:11:41.686455 1440 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod079c025c_7182_4c3b_9417_081eb20ee218.slice/cri-containerd-78ec9cdfdb6d0349d012acdcc46b5ef5a9774a8347baab48a92ec7344804b773.scope WatchSource:0}: container "78ec9cdfdb6d0349d012acdcc46b5ef5a9774a8347baab48a92ec7344804b773" in namespace "k8s.io": not found Oct 2 19:11:42.062106 kubelet[1440]: E1002 19:11:42.062003 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:43.062614 kubelet[1440]: E1002 19:11:43.062572 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:44.063387 kubelet[1440]: E1002 19:11:44.063351 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:44.795864 kubelet[1440]: W1002 19:11:44.795821 1440 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod079c025c_7182_4c3b_9417_081eb20ee218.slice/cri-containerd-abb25711aea0601899f78f21b306e733d1184529f76e54fc57e2d6a84982ae25.scope WatchSource:0}: task abb25711aea0601899f78f21b306e733d1184529f76e54fc57e2d6a84982ae25 not found: not found Oct 2 19:11:45.063910 kubelet[1440]: E1002 19:11:45.063812 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:45.132738 kubelet[1440]: E1002 19:11:45.132704 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" Oct 2 19:11:46.065214 kubelet[1440]: E1002 19:11:46.065176 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:47.066005 kubelet[1440]: E1002 19:11:47.065954 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:48.067728 kubelet[1440]: E1002 19:11:48.067652 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:49.068383 kubelet[1440]: E1002 19:11:49.068318 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:50.069456 kubelet[1440]: E1002 19:11:50.069422 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:50.134155 kubelet[1440]: E1002 19:11:50.134130 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:11:51.070276 kubelet[1440]: E1002 19:11:51.070238 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:52.070785 kubelet[1440]: E1002 19:11:52.070727 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:53.070992 kubelet[1440]: E1002 19:11:53.070946 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:54.074091 kubelet[1440]: E1002 19:11:54.071764 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:54.979445 update_engine[1130]: I1002 19:11:54.979384 1130 update_attempter.cc:505] Updating boot flags... 
Oct 2 19:11:55.042471 kubelet[1440]: E1002 19:11:55.042403 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:55.074734 kubelet[1440]: E1002 19:11:55.074698 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:55.135686 kubelet[1440]: E1002 19:11:55.135659 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:11:56.075688 kubelet[1440]: E1002 19:11:56.075638 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:56.259101 kubelet[1440]: E1002 19:11:56.259069 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:11:56.261680 env[1138]: time="2023-10-02T19:11:56.261624725Z" level=info msg="CreateContainer within sandbox \"1345a4f9873144fb6491307fad0e28f0f7738b1cb763d1e1aec9ff89300916c3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:11:56.273175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3511800871.mount: Deactivated successfully. 
Oct 2 19:11:56.275541 env[1138]: time="2023-10-02T19:11:56.275412318Z" level=info msg="CreateContainer within sandbox \"1345a4f9873144fb6491307fad0e28f0f7738b1cb763d1e1aec9ff89300916c3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"338896357796379d15c7ffd710f84b0f908d1ec55a5a0f5a37bf36bad695ea90\"" Oct 2 19:11:56.275963 env[1138]: time="2023-10-02T19:11:56.275930239Z" level=info msg="StartContainer for \"338896357796379d15c7ffd710f84b0f908d1ec55a5a0f5a37bf36bad695ea90\"" Oct 2 19:11:56.298044 systemd[1]: Started cri-containerd-338896357796379d15c7ffd710f84b0f908d1ec55a5a0f5a37bf36bad695ea90.scope. Oct 2 19:11:56.315803 systemd[1]: cri-containerd-338896357796379d15c7ffd710f84b0f908d1ec55a5a0f5a37bf36bad695ea90.scope: Deactivated successfully. Oct 2 19:11:56.419309 env[1138]: time="2023-10-02T19:11:56.419177109Z" level=info msg="shim disconnected" id=338896357796379d15c7ffd710f84b0f908d1ec55a5a0f5a37bf36bad695ea90 Oct 2 19:11:56.419309 env[1138]: time="2023-10-02T19:11:56.419237140Z" level=warning msg="cleaning up after shim disconnected" id=338896357796379d15c7ffd710f84b0f908d1ec55a5a0f5a37bf36bad695ea90 namespace=k8s.io Oct 2 19:11:56.419309 env[1138]: time="2023-10-02T19:11:56.419246859Z" level=info msg="cleaning up dead shim" Oct 2 19:11:56.428371 env[1138]: time="2023-10-02T19:11:56.428313246Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:11:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1887 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:11:56Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/338896357796379d15c7ffd710f84b0f908d1ec55a5a0f5a37bf36bad695ea90/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:11:56.428650 env[1138]: time="2023-10-02T19:11:56.428580365Z" level=error msg="copy shim log" error="read /proc/self/fd/58: file already closed" Oct 2 19:11:56.433750 env[1138]: 
time="2023-10-02T19:11:56.433706709Z" level=error msg="Failed to pipe stdout of container \"338896357796379d15c7ffd710f84b0f908d1ec55a5a0f5a37bf36bad695ea90\"" error="reading from a closed fifo" Oct 2 19:11:56.433888 env[1138]: time="2023-10-02T19:11:56.433736465Z" level=error msg="Failed to pipe stderr of container \"338896357796379d15c7ffd710f84b0f908d1ec55a5a0f5a37bf36bad695ea90\"" error="reading from a closed fifo" Oct 2 19:11:56.435908 env[1138]: time="2023-10-02T19:11:56.435848705Z" level=error msg="StartContainer for \"338896357796379d15c7ffd710f84b0f908d1ec55a5a0f5a37bf36bad695ea90\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:11:56.436168 kubelet[1440]: E1002 19:11:56.436132 1440 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="338896357796379d15c7ffd710f84b0f908d1ec55a5a0f5a37bf36bad695ea90" Oct 2 19:11:56.436300 kubelet[1440]: E1002 19:11:56.436271 1440 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:11:56.436300 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:11:56.436300 kubelet[1440]: rm /hostbin/cilium-mount Oct 2 19:11:56.436300 kubelet[1440]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tfv2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-qg5th_kube-system(079c025c-7182-4c3b-9417-081eb20ee218): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:11:56.436442 kubelet[1440]: E1002 19:11:56.436342 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-qg5th" podUID=079c025c-7182-4c3b-9417-081eb20ee218 Oct 2 19:11:57.077922 kubelet[1440]: E1002 19:11:57.077877 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:57.271562 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-338896357796379d15c7ffd710f84b0f908d1ec55a5a0f5a37bf36bad695ea90-rootfs.mount: Deactivated successfully. Oct 2 19:11:57.337851 kubelet[1440]: I1002 19:11:57.337748 1440 scope.go:115] "RemoveContainer" containerID="abb25711aea0601899f78f21b306e733d1184529f76e54fc57e2d6a84982ae25" Oct 2 19:11:57.338032 kubelet[1440]: I1002 19:11:57.338000 1440 scope.go:115] "RemoveContainer" containerID="abb25711aea0601899f78f21b306e733d1184529f76e54fc57e2d6a84982ae25" Oct 2 19:11:57.341119 env[1138]: time="2023-10-02T19:11:57.340784660Z" level=info msg="RemoveContainer for \"abb25711aea0601899f78f21b306e733d1184529f76e54fc57e2d6a84982ae25\"" Oct 2 19:11:57.342509 env[1138]: time="2023-10-02T19:11:57.342361476Z" level=info msg="RemoveContainer for \"abb25711aea0601899f78f21b306e733d1184529f76e54fc57e2d6a84982ae25\"" Oct 2 19:11:57.342509 env[1138]: time="2023-10-02T19:11:57.342435465Z" level=error msg="RemoveContainer for \"abb25711aea0601899f78f21b306e733d1184529f76e54fc57e2d6a84982ae25\" failed" error="rpc error: code = NotFound desc = get container info: container \"abb25711aea0601899f78f21b306e733d1184529f76e54fc57e2d6a84982ae25\" in namespace \"k8s.io\": not found" Oct 2 19:11:57.343042 env[1138]: time="2023-10-02T19:11:57.342955471Z" level=info msg="RemoveContainer for \"abb25711aea0601899f78f21b306e733d1184529f76e54fc57e2d6a84982ae25\" returns successfully" Oct 2 19:11:57.344044 kubelet[1440]: E1002 19:11:57.344015 1440 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = NotFound desc = get container info: container 
\"abb25711aea0601899f78f21b306e733d1184529f76e54fc57e2d6a84982ae25\" in namespace \"k8s.io\": not found" containerID="abb25711aea0601899f78f21b306e733d1184529f76e54fc57e2d6a84982ae25" Oct 2 19:11:57.344116 kubelet[1440]: E1002 19:11:57.344070 1440 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = NotFound desc = get container info: container "abb25711aea0601899f78f21b306e733d1184529f76e54fc57e2d6a84982ae25" in namespace "k8s.io": not found; Skipping pod "cilium-qg5th_kube-system(079c025c-7182-4c3b-9417-081eb20ee218)" Oct 2 19:11:57.344159 kubelet[1440]: E1002 19:11:57.344143 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:11:57.344390 kubelet[1440]: E1002 19:11:57.344378 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-qg5th_kube-system(079c025c-7182-4c3b-9417-081eb20ee218)\"" pod="kube-system/cilium-qg5th" podUID=079c025c-7182-4c3b-9417-081eb20ee218 Oct 2 19:11:58.078860 kubelet[1440]: E1002 19:11:58.078789 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:59.078988 kubelet[1440]: E1002 19:11:59.078929 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:11:59.527909 kubelet[1440]: W1002 19:11:59.527796 1440 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod079c025c_7182_4c3b_9417_081eb20ee218.slice/cri-containerd-338896357796379d15c7ffd710f84b0f908d1ec55a5a0f5a37bf36bad695ea90.scope WatchSource:0}: task 338896357796379d15c7ffd710f84b0f908d1ec55a5a0f5a37bf36bad695ea90 not found: not found Oct 2 
19:12:00.079428 kubelet[1440]: E1002 19:12:00.079387 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:00.136369 kubelet[1440]: E1002 19:12:00.136341 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:12:01.080352 kubelet[1440]: E1002 19:12:01.080305 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:02.086249 kubelet[1440]: E1002 19:12:02.086190 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:03.087264 kubelet[1440]: E1002 19:12:03.087218 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:04.088216 kubelet[1440]: E1002 19:12:04.088174 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:05.088772 kubelet[1440]: E1002 19:12:05.088727 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:05.137274 kubelet[1440]: E1002 19:12:05.137249 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:12:06.089501 kubelet[1440]: E1002 19:12:06.089457 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:07.090510 kubelet[1440]: E1002 19:12:07.090453 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:08.091134 kubelet[1440]: E1002 19:12:08.091089 1440 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:08.258425 kubelet[1440]: E1002 19:12:08.258391 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:12:08.260552 kubelet[1440]: E1002 19:12:08.258712 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-qg5th_kube-system(079c025c-7182-4c3b-9417-081eb20ee218)\"" pod="kube-system/cilium-qg5th" podUID=079c025c-7182-4c3b-9417-081eb20ee218 Oct 2 19:12:09.091949 kubelet[1440]: E1002 19:12:09.091905 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:10.092476 kubelet[1440]: E1002 19:12:10.092437 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:10.138038 kubelet[1440]: E1002 19:12:10.138018 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:12:11.093695 kubelet[1440]: E1002 19:12:11.093653 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:12.094194 kubelet[1440]: E1002 19:12:12.094157 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:13.095045 kubelet[1440]: E1002 19:12:13.094986 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:14.095876 kubelet[1440]: E1002 19:12:14.095839 1440 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:15.041712 kubelet[1440]: E1002 19:12:15.041683 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:15.096560 kubelet[1440]: E1002 19:12:15.096520 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:15.138769 kubelet[1440]: E1002 19:12:15.138745 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:12:16.097598 kubelet[1440]: E1002 19:12:16.097562 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:17.098417 kubelet[1440]: E1002 19:12:17.098351 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:18.099530 kubelet[1440]: E1002 19:12:18.099482 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:19.099657 kubelet[1440]: E1002 19:12:19.099615 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:19.259460 kubelet[1440]: E1002 19:12:19.259422 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:12:19.261602 env[1138]: time="2023-10-02T19:12:19.261547187Z" level=info msg="CreateContainer within sandbox \"1345a4f9873144fb6491307fad0e28f0f7738b1cb763d1e1aec9ff89300916c3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:12:19.270541 env[1138]: time="2023-10-02T19:12:19.270493212Z" level=info msg="CreateContainer within 
sandbox \"1345a4f9873144fb6491307fad0e28f0f7738b1cb763d1e1aec9ff89300916c3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"01cd02bb718e2d4eaa51a95d4abf3fdb50646ce74af8e1d573a5a9796ee96e99\"" Oct 2 19:12:19.271205 env[1138]: time="2023-10-02T19:12:19.271169162Z" level=info msg="StartContainer for \"01cd02bb718e2d4eaa51a95d4abf3fdb50646ce74af8e1d573a5a9796ee96e99\"" Oct 2 19:12:19.288177 systemd[1]: Started cri-containerd-01cd02bb718e2d4eaa51a95d4abf3fdb50646ce74af8e1d573a5a9796ee96e99.scope. Oct 2 19:12:19.332073 systemd[1]: cri-containerd-01cd02bb718e2d4eaa51a95d4abf3fdb50646ce74af8e1d573a5a9796ee96e99.scope: Deactivated successfully. Oct 2 19:12:19.335496 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01cd02bb718e2d4eaa51a95d4abf3fdb50646ce74af8e1d573a5a9796ee96e99-rootfs.mount: Deactivated successfully. Oct 2 19:12:19.340692 env[1138]: time="2023-10-02T19:12:19.340641434Z" level=info msg="shim disconnected" id=01cd02bb718e2d4eaa51a95d4abf3fdb50646ce74af8e1d573a5a9796ee96e99 Oct 2 19:12:19.340692 env[1138]: time="2023-10-02T19:12:19.340693993Z" level=warning msg="cleaning up after shim disconnected" id=01cd02bb718e2d4eaa51a95d4abf3fdb50646ce74af8e1d573a5a9796ee96e99 namespace=k8s.io Oct 2 19:12:19.340947 env[1138]: time="2023-10-02T19:12:19.340702753Z" level=info msg="cleaning up dead shim" Oct 2 19:12:19.348651 env[1138]: time="2023-10-02T19:12:19.348599434Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:12:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1927 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:12:19Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/01cd02bb718e2d4eaa51a95d4abf3fdb50646ce74af8e1d573a5a9796ee96e99/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:12:19.348909 env[1138]: time="2023-10-02T19:12:19.348852870Z" level=error msg="copy shim log" error="read 
/proc/self/fd/23: file already closed" Oct 2 19:12:19.349071 env[1138]: time="2023-10-02T19:12:19.349019027Z" level=error msg="Failed to pipe stdout of container \"01cd02bb718e2d4eaa51a95d4abf3fdb50646ce74af8e1d573a5a9796ee96e99\"" error="reading from a closed fifo" Oct 2 19:12:19.349137 env[1138]: time="2023-10-02T19:12:19.349107466Z" level=error msg="Failed to pipe stderr of container \"01cd02bb718e2d4eaa51a95d4abf3fdb50646ce74af8e1d573a5a9796ee96e99\"" error="reading from a closed fifo" Oct 2 19:12:19.350545 env[1138]: time="2023-10-02T19:12:19.350459326Z" level=error msg="StartContainer for \"01cd02bb718e2d4eaa51a95d4abf3fdb50646ce74af8e1d573a5a9796ee96e99\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:12:19.350868 kubelet[1440]: E1002 19:12:19.350846 1440 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="01cd02bb718e2d4eaa51a95d4abf3fdb50646ce74af8e1d573a5a9796ee96e99" Oct 2 19:12:19.351194 kubelet[1440]: E1002 19:12:19.351143 1440 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:12:19.351194 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:12:19.351194 kubelet[1440]: rm /hostbin/cilium-mount Oct 2 19:12:19.351194 kubelet[1440]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tfv2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-qg5th_kube-system(079c025c-7182-4c3b-9417-081eb20ee218): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:12:19.351510 kubelet[1440]: E1002 19:12:19.351187 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-qg5th" podUID=079c025c-7182-4c3b-9417-081eb20ee218 Oct 2 19:12:19.370016 kubelet[1440]: I1002 19:12:19.369995 1440 scope.go:115] "RemoveContainer" containerID="338896357796379d15c7ffd710f84b0f908d1ec55a5a0f5a37bf36bad695ea90" Oct 2 19:12:19.370311 kubelet[1440]: I1002 19:12:19.370289 1440 scope.go:115] "RemoveContainer" containerID="338896357796379d15c7ffd710f84b0f908d1ec55a5a0f5a37bf36bad695ea90" Oct 2 19:12:19.371314 env[1138]: time="2023-10-02T19:12:19.371282772Z" level=info msg="RemoveContainer for \"338896357796379d15c7ffd710f84b0f908d1ec55a5a0f5a37bf36bad695ea90\"" Oct 2 19:12:19.371749 env[1138]: time="2023-10-02T19:12:19.371725205Z" level=info msg="RemoveContainer for \"338896357796379d15c7ffd710f84b0f908d1ec55a5a0f5a37bf36bad695ea90\"" Oct 2 19:12:19.371841 env[1138]: time="2023-10-02T19:12:19.371812084Z" level=error msg="RemoveContainer for \"338896357796379d15c7ffd710f84b0f908d1ec55a5a0f5a37bf36bad695ea90\" failed" error="failed to set removing state for container \"338896357796379d15c7ffd710f84b0f908d1ec55a5a0f5a37bf36bad695ea90\": container is already in removing state" Oct 2 19:12:19.371984 kubelet[1440]: E1002 19:12:19.371971 1440 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"338896357796379d15c7ffd710f84b0f908d1ec55a5a0f5a37bf36bad695ea90\": container is already in removing state" containerID="338896357796379d15c7ffd710f84b0f908d1ec55a5a0f5a37bf36bad695ea90" Oct 2 19:12:19.372075 kubelet[1440]: E1002 19:12:19.371998 1440 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "338896357796379d15c7ffd710f84b0f908d1ec55a5a0f5a37bf36bad695ea90": container is already in removing state; Skipping pod "cilium-qg5th_kube-system(079c025c-7182-4c3b-9417-081eb20ee218)" Oct 2 
19:12:19.372116 kubelet[1440]: E1002 19:12:19.372081 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:12:19.372467 kubelet[1440]: E1002 19:12:19.372441 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-qg5th_kube-system(079c025c-7182-4c3b-9417-081eb20ee218)\"" pod="kube-system/cilium-qg5th" podUID=079c025c-7182-4c3b-9417-081eb20ee218 Oct 2 19:12:19.373681 env[1138]: time="2023-10-02T19:12:19.373617376Z" level=info msg="RemoveContainer for \"338896357796379d15c7ffd710f84b0f908d1ec55a5a0f5a37bf36bad695ea90\" returns successfully" Oct 2 19:12:20.100203 kubelet[1440]: E1002 19:12:20.100138 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:20.140477 kubelet[1440]: E1002 19:12:20.140454 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:12:21.100511 kubelet[1440]: E1002 19:12:21.100450 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:22.101319 kubelet[1440]: E1002 19:12:22.101272 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:22.445325 kubelet[1440]: W1002 19:12:22.445220 1440 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod079c025c_7182_4c3b_9417_081eb20ee218.slice/cri-containerd-01cd02bb718e2d4eaa51a95d4abf3fdb50646ce74af8e1d573a5a9796ee96e99.scope WatchSource:0}: task 
01cd02bb718e2d4eaa51a95d4abf3fdb50646ce74af8e1d573a5a9796ee96e99 not found: not found Oct 2 19:12:23.101655 kubelet[1440]: E1002 19:12:23.101594 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:24.102072 kubelet[1440]: E1002 19:12:24.102031 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:25.102994 kubelet[1440]: E1002 19:12:25.102938 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:25.141442 kubelet[1440]: E1002 19:12:25.141412 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:12:26.103456 kubelet[1440]: E1002 19:12:26.103413 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:27.103641 kubelet[1440]: E1002 19:12:27.103570 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:28.104022 kubelet[1440]: E1002 19:12:28.103958 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:29.105148 kubelet[1440]: E1002 19:12:29.105097 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:30.105561 kubelet[1440]: E1002 19:12:30.105497 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:30.142118 kubelet[1440]: E1002 19:12:30.142095 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni 
plugin not initialized" Oct 2 19:12:31.106262 kubelet[1440]: E1002 19:12:31.106212 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:32.107090 kubelet[1440]: E1002 19:12:32.107037 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:33.107497 kubelet[1440]: E1002 19:12:33.107444 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:34.108320 kubelet[1440]: E1002 19:12:34.108252 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:34.258470 kubelet[1440]: E1002 19:12:34.258421 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:12:34.258866 kubelet[1440]: E1002 19:12:34.258626 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-qg5th_kube-system(079c025c-7182-4c3b-9417-081eb20ee218)\"" pod="kube-system/cilium-qg5th" podUID=079c025c-7182-4c3b-9417-081eb20ee218 Oct 2 19:12:35.042310 kubelet[1440]: E1002 19:12:35.042274 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:35.108814 kubelet[1440]: E1002 19:12:35.108769 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:35.142767 kubelet[1440]: E1002 19:12:35.142740 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:12:36.109136 
kubelet[1440]: E1002 19:12:36.109092 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:37.109552 kubelet[1440]: E1002 19:12:37.109507 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:38.110169 kubelet[1440]: E1002 19:12:38.110127 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:39.111142 kubelet[1440]: E1002 19:12:39.111087 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:40.112118 kubelet[1440]: E1002 19:12:40.112068 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:40.143892 kubelet[1440]: E1002 19:12:40.143872 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:12:41.113034 kubelet[1440]: E1002 19:12:41.112964 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:42.114837 kubelet[1440]: E1002 19:12:42.114786 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:43.115868 kubelet[1440]: E1002 19:12:43.115828 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:44.116896 kubelet[1440]: E1002 19:12:44.116851 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:45.118295 kubelet[1440]: E1002 19:12:45.117832 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:12:45.144416 kubelet[1440]: E1002 19:12:45.144390 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:12:46.118413 kubelet[1440]: E1002 19:12:46.118344 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:47.118777 kubelet[1440]: E1002 19:12:47.118719 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:47.259368 kubelet[1440]: E1002 19:12:47.259308 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:12:47.259527 kubelet[1440]: E1002 19:12:47.259510 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-qg5th_kube-system(079c025c-7182-4c3b-9417-081eb20ee218)\"" pod="kube-system/cilium-qg5th" podUID=079c025c-7182-4c3b-9417-081eb20ee218 Oct 2 19:12:48.119651 kubelet[1440]: E1002 19:12:48.119582 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:49.120473 kubelet[1440]: E1002 19:12:49.120411 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:50.121487 kubelet[1440]: E1002 19:12:50.121441 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:50.145116 kubelet[1440]: E1002 19:12:50.145090 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" Oct 2 19:12:51.122239 kubelet[1440]: E1002 19:12:51.122187 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:52.123623 kubelet[1440]: E1002 19:12:52.122436 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:53.123343 kubelet[1440]: E1002 19:12:53.123278 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:54.123438 kubelet[1440]: E1002 19:12:54.123377 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:55.041912 kubelet[1440]: E1002 19:12:55.041814 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:55.124204 kubelet[1440]: E1002 19:12:55.124137 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:55.146658 kubelet[1440]: E1002 19:12:55.146608 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:12:56.124751 kubelet[1440]: E1002 19:12:56.124676 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:57.125721 kubelet[1440]: E1002 19:12:57.125684 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:58.127231 kubelet[1440]: E1002 19:12:58.127194 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:12:59.128264 kubelet[1440]: E1002 19:12:59.128231 1440 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:13:00.129219 kubelet[1440]: E1002 19:13:00.129184 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:13:00.148070 kubelet[1440]: E1002 19:13:00.148045 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:13:01.129784 kubelet[1440]: E1002 19:13:01.129722 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:13:01.258732 kubelet[1440]: E1002 19:13:01.258703 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:13:01.267172 env[1138]: time="2023-10-02T19:13:01.267126750Z" level=info msg="CreateContainer within sandbox \"1345a4f9873144fb6491307fad0e28f0f7738b1cb763d1e1aec9ff89300916c3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 19:13:01.275682 env[1138]: time="2023-10-02T19:13:01.275619464Z" level=info msg="CreateContainer within sandbox \"1345a4f9873144fb6491307fad0e28f0f7738b1cb763d1e1aec9ff89300916c3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"99fa809fbfa6891a14f1bd97cbbe69494ed756b83d16e97350d718275df9e084\"" Oct 2 19:13:01.276177 env[1138]: time="2023-10-02T19:13:01.276150261Z" level=info msg="StartContainer for \"99fa809fbfa6891a14f1bd97cbbe69494ed756b83d16e97350d718275df9e084\"" Oct 2 19:13:01.293722 systemd[1]: Started cri-containerd-99fa809fbfa6891a14f1bd97cbbe69494ed756b83d16e97350d718275df9e084.scope. Oct 2 19:13:01.307620 systemd[1]: cri-containerd-99fa809fbfa6891a14f1bd97cbbe69494ed756b83d16e97350d718275df9e084.scope: Deactivated successfully. 
Oct 2 19:13:01.310748 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99fa809fbfa6891a14f1bd97cbbe69494ed756b83d16e97350d718275df9e084-rootfs.mount: Deactivated successfully. Oct 2 19:13:01.315615 env[1138]: time="2023-10-02T19:13:01.315568369Z" level=info msg="shim disconnected" id=99fa809fbfa6891a14f1bd97cbbe69494ed756b83d16e97350d718275df9e084 Oct 2 19:13:01.315798 env[1138]: time="2023-10-02T19:13:01.315617088Z" level=warning msg="cleaning up after shim disconnected" id=99fa809fbfa6891a14f1bd97cbbe69494ed756b83d16e97350d718275df9e084 namespace=k8s.io Oct 2 19:13:01.315798 env[1138]: time="2023-10-02T19:13:01.315634408Z" level=info msg="cleaning up dead shim" Oct 2 19:13:01.323474 env[1138]: time="2023-10-02T19:13:01.323427886Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:13:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1967 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:13:01Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/99fa809fbfa6891a14f1bd97cbbe69494ed756b83d16e97350d718275df9e084/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:13:01.323753 env[1138]: time="2023-10-02T19:13:01.323696205Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:13:01.324055 env[1138]: time="2023-10-02T19:13:01.324017843Z" level=error msg="Failed to pipe stderr of container \"99fa809fbfa6891a14f1bd97cbbe69494ed756b83d16e97350d718275df9e084\"" error="reading from a closed fifo" Oct 2 19:13:01.324586 env[1138]: time="2023-10-02T19:13:01.324539120Z" level=error msg="Failed to pipe stdout of container \"99fa809fbfa6891a14f1bd97cbbe69494ed756b83d16e97350d718275df9e084\"" error="reading from a closed fifo" Oct 2 19:13:01.326542 env[1138]: time="2023-10-02T19:13:01.326484590Z" level=error msg="StartContainer for \"99fa809fbfa6891a14f1bd97cbbe69494ed756b83d16e97350d718275df9e084\" failed" 
error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Oct 2 19:13:01.326739 kubelet[1440]: E1002 19:13:01.326713 1440 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="99fa809fbfa6891a14f1bd97cbbe69494ed756b83d16e97350d718275df9e084"
Oct 2 19:13:01.326842 kubelet[1440]: E1002 19:13:01.326828 1440 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Oct 2 19:13:01.326842 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Oct 2 19:13:01.326842 kubelet[1440]: rm /hostbin/cilium-mount
Oct 2 19:13:01.326842 kubelet[1440]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tfv2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-qg5th_kube-system(079c025c-7182-4c3b-9417-081eb20ee218): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Oct 2 19:13:01.326980 kubelet[1440]: E1002 19:13:01.326868 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-qg5th" podUID=079c025c-7182-4c3b-9417-081eb20ee218
Oct 2 19:13:01.440494 kubelet[1440]: I1002 19:13:01.439528 1440 scope.go:115] "RemoveContainer" containerID="01cd02bb718e2d4eaa51a95d4abf3fdb50646ce74af8e1d573a5a9796ee96e99"
Oct 2 19:13:01.440494 kubelet[1440]: I1002 19:13:01.439864 1440 scope.go:115] "RemoveContainer" containerID="01cd02bb718e2d4eaa51a95d4abf3fdb50646ce74af8e1d573a5a9796ee96e99"
Oct 2 19:13:01.442132 env[1138]: time="2023-10-02T19:13:01.442099846Z" level=info msg="RemoveContainer for \"01cd02bb718e2d4eaa51a95d4abf3fdb50646ce74af8e1d573a5a9796ee96e99\""
Oct 2 19:13:01.442251 env[1138]: time="2023-10-02T19:13:01.442226005Z" level=info msg="RemoveContainer for \"01cd02bb718e2d4eaa51a95d4abf3fdb50646ce74af8e1d573a5a9796ee96e99\""
Oct 2 19:13:01.442334 env[1138]: time="2023-10-02T19:13:01.442307325Z" level=error msg="RemoveContainer for \"01cd02bb718e2d4eaa51a95d4abf3fdb50646ce74af8e1d573a5a9796ee96e99\" failed" error="failed to set removing state for container \"01cd02bb718e2d4eaa51a95d4abf3fdb50646ce74af8e1d573a5a9796ee96e99\": container is already in removing state"
Oct 2 19:13:01.442434 kubelet[1440]: E1002 19:13:01.442420 1440 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"01cd02bb718e2d4eaa51a95d4abf3fdb50646ce74af8e1d573a5a9796ee96e99\": container is already in removing state" containerID="01cd02bb718e2d4eaa51a95d4abf3fdb50646ce74af8e1d573a5a9796ee96e99"
Oct 2 19:13:01.442474 kubelet[1440]: E1002 19:13:01.442449 1440 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "01cd02bb718e2d4eaa51a95d4abf3fdb50646ce74af8e1d573a5a9796ee96e99": container is already in removing state; Skipping pod "cilium-qg5th_kube-system(079c025c-7182-4c3b-9417-081eb20ee218)"
Oct 2 19:13:01.442526 kubelet[1440]: E1002 19:13:01.442514 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:13:01.442745 kubelet[1440]: E1002 19:13:01.442731 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-qg5th_kube-system(079c025c-7182-4c3b-9417-081eb20ee218)\"" pod="kube-system/cilium-qg5th" podUID=079c025c-7182-4c3b-9417-081eb20ee218
Oct 2 19:13:01.444319 env[1138]: time="2023-10-02T19:13:01.444292354Z" level=info msg="RemoveContainer for \"01cd02bb718e2d4eaa51a95d4abf3fdb50646ce74af8e1d573a5a9796ee96e99\" returns successfully"
Oct 2 19:13:02.130697 kubelet[1440]: E1002 19:13:02.130646 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:03.131713 kubelet[1440]: E1002 19:13:03.131663 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:04.131942 kubelet[1440]: E1002 19:13:04.131871 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:04.420277 kubelet[1440]: W1002 19:13:04.420170 1440 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod079c025c_7182_4c3b_9417_081eb20ee218.slice/cri-containerd-99fa809fbfa6891a14f1bd97cbbe69494ed756b83d16e97350d718275df9e084.scope WatchSource:0}: task 99fa809fbfa6891a14f1bd97cbbe69494ed756b83d16e97350d718275df9e084 not found: not found
Oct 2 19:13:05.132570 kubelet[1440]: E1002 19:13:05.132516 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:05.148990 kubelet[1440]: E1002 19:13:05.148966 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:13:06.133506 kubelet[1440]: E1002 19:13:06.132989 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:07.133430 kubelet[1440]: E1002 19:13:07.133362 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:08.133879 kubelet[1440]: E1002 19:13:08.133830 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:09.134521 kubelet[1440]: E1002 19:13:09.134453 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:09.258886 kubelet[1440]: E1002 19:13:09.258845 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:13:10.135654 kubelet[1440]: E1002 19:13:10.135586 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:10.150219 kubelet[1440]: E1002 19:13:10.150186 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:13:11.136477 kubelet[1440]: E1002 19:13:11.136400 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:12.137272 kubelet[1440]: E1002 19:13:12.137205 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:13.138156 kubelet[1440]: E1002 19:13:13.138110 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:13.260039 kubelet[1440]: E1002 19:13:13.260007 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:13:13.260244 kubelet[1440]: E1002 19:13:13.260218 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-qg5th_kube-system(079c025c-7182-4c3b-9417-081eb20ee218)\"" pod="kube-system/cilium-qg5th" podUID=079c025c-7182-4c3b-9417-081eb20ee218
Oct 2 19:13:14.138325 kubelet[1440]: E1002 19:13:14.138275 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:15.042574 kubelet[1440]: E1002 19:13:15.042533 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:15.139015 kubelet[1440]: E1002 19:13:15.138986 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:15.151567 kubelet[1440]: E1002 19:13:15.151537 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:13:16.140069 kubelet[1440]: E1002 19:13:16.140019 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:17.140929 kubelet[1440]: E1002 19:13:17.140860 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:18.142092 kubelet[1440]: E1002 19:13:18.142019 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:19.142416 kubelet[1440]: E1002 19:13:19.142350 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:20.142577 kubelet[1440]: E1002 19:13:20.142512 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:20.152386 kubelet[1440]: E1002 19:13:20.152357 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:13:21.143474 kubelet[1440]: E1002 19:13:21.143388 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:22.144574 kubelet[1440]: E1002 19:13:22.144528 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:23.145195 kubelet[1440]: E1002 19:13:23.145146 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:24.146058 kubelet[1440]: E1002 19:13:24.146015 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:25.146443 kubelet[1440]: E1002 19:13:25.146393 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:25.153072 kubelet[1440]: E1002 19:13:25.153041 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:13:26.147431 kubelet[1440]: E1002 19:13:26.147388 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:27.147967 kubelet[1440]: E1002 19:13:27.147894 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:28.148910 kubelet[1440]: E1002 19:13:28.148869 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:28.258991 kubelet[1440]: E1002 19:13:28.258960 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:13:28.259190 kubelet[1440]: E1002 19:13:28.259164 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-qg5th_kube-system(079c025c-7182-4c3b-9417-081eb20ee218)\"" pod="kube-system/cilium-qg5th" podUID=079c025c-7182-4c3b-9417-081eb20ee218
Oct 2 19:13:29.149918 kubelet[1440]: E1002 19:13:29.149874 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:30.151001 kubelet[1440]: E1002 19:13:30.150958 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:30.154523 kubelet[1440]: E1002 19:13:30.154499 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:13:31.151278 kubelet[1440]: E1002 19:13:31.151208 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:32.151404 kubelet[1440]: E1002 19:13:32.151344 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:33.152325 kubelet[1440]: E1002 19:13:33.152238 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:34.152854 kubelet[1440]: E1002 19:13:34.152809 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:35.041709 kubelet[1440]: E1002 19:13:35.041656 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:35.153464 kubelet[1440]: E1002 19:13:35.153420 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:35.154989 kubelet[1440]: E1002 19:13:35.154966 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:13:36.154500 kubelet[1440]: E1002 19:13:36.154435 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:37.155402 kubelet[1440]: E1002 19:13:37.155345 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:38.155792 kubelet[1440]: E1002 19:13:38.155716 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:39.156848 kubelet[1440]: E1002 19:13:39.156776 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:40.156365 kubelet[1440]: E1002 19:13:40.156340 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:13:40.157542 kubelet[1440]: E1002 19:13:40.157514 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:41.158335 kubelet[1440]: E1002 19:13:41.158291 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:41.260164 kubelet[1440]: E1002 19:13:41.260129 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:13:41.260904 kubelet[1440]: E1002 19:13:41.260878 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-qg5th_kube-system(079c025c-7182-4c3b-9417-081eb20ee218)\"" pod="kube-system/cilium-qg5th" podUID=079c025c-7182-4c3b-9417-081eb20ee218
Oct 2 19:13:42.159123 kubelet[1440]: E1002 19:13:42.158960 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:43.159317 kubelet[1440]: E1002 19:13:43.159265 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:44.159711 kubelet[1440]: E1002 19:13:44.159673 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:45.157339 kubelet[1440]: E1002 19:13:45.157314 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:13:45.160482 kubelet[1440]: E1002 19:13:45.160464 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:46.160587 kubelet[1440]: E1002 19:13:46.160544 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:47.160862 kubelet[1440]: E1002 19:13:47.160822 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:48.162680 kubelet[1440]: E1002 19:13:48.161887 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:49.162776 kubelet[1440]: E1002 19:13:49.162738 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:50.158358 kubelet[1440]: E1002 19:13:50.158331 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:13:50.163531 kubelet[1440]: E1002 19:13:50.163506 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:51.163967 kubelet[1440]: E1002 19:13:51.163932 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:52.166366 kubelet[1440]: E1002 19:13:52.166319 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:53.167350 kubelet[1440]: E1002 19:13:53.167305 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:53.258732 kubelet[1440]: E1002 19:13:53.258697 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:13:53.258947 kubelet[1440]: E1002 19:13:53.258929 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-qg5th_kube-system(079c025c-7182-4c3b-9417-081eb20ee218)\"" pod="kube-system/cilium-qg5th" podUID=079c025c-7182-4c3b-9417-081eb20ee218
Oct 2 19:13:54.168311 kubelet[1440]: E1002 19:13:54.168265 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:55.042377 kubelet[1440]: E1002 19:13:55.042325 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:55.159585 kubelet[1440]: E1002 19:13:55.159559 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:13:55.169132 kubelet[1440]: E1002 19:13:55.169089 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:56.169842 kubelet[1440]: E1002 19:13:56.169802 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:57.170329 kubelet[1440]: E1002 19:13:57.170285 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:58.171189 kubelet[1440]: E1002 19:13:58.171143 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:13:59.172053 kubelet[1440]: E1002 19:13:59.172015 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:00.161217 kubelet[1440]: E1002 19:14:00.161181 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:14:00.172782 kubelet[1440]: E1002 19:14:00.172753 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:01.173384 kubelet[1440]: E1002 19:14:01.173340 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:02.174481 kubelet[1440]: E1002 19:14:02.173885 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:03.174340 kubelet[1440]: E1002 19:14:03.174293 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:04.174805 kubelet[1440]: E1002 19:14:04.174750 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:05.162621 kubelet[1440]: E1002 19:14:05.162461 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:14:05.175741 kubelet[1440]: E1002 19:14:05.175698 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:06.176862 kubelet[1440]: E1002 19:14:06.176769 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:07.177303 kubelet[1440]: E1002 19:14:07.177238 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:08.178000 kubelet[1440]: E1002 19:14:08.177953 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:08.258396 kubelet[1440]: E1002 19:14:08.258358 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:14:08.258618 kubelet[1440]: E1002 19:14:08.258592 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-qg5th_kube-system(079c025c-7182-4c3b-9417-081eb20ee218)\"" pod="kube-system/cilium-qg5th" podUID=079c025c-7182-4c3b-9417-081eb20ee218
Oct 2 19:14:09.178841 kubelet[1440]: E1002 19:14:09.178793 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:10.164005 kubelet[1440]: E1002 19:14:10.163977 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:14:10.179432 kubelet[1440]: E1002 19:14:10.179398 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:11.180441 kubelet[1440]: E1002 19:14:11.180389 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:12.180749 kubelet[1440]: E1002 19:14:12.180697 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:13.181615 kubelet[1440]: E1002 19:14:13.181570 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:14.182683 kubelet[1440]: E1002 19:14:14.182626 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:15.041964 kubelet[1440]: E1002 19:14:15.041928 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:15.165040 kubelet[1440]: E1002 19:14:15.165012 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:14:15.183180 kubelet[1440]: E1002 19:14:15.183145 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:16.183751 kubelet[1440]: E1002 19:14:16.183715 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:17.184209 kubelet[1440]: E1002 19:14:17.184162 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:18.184446 kubelet[1440]: E1002 19:14:18.184411 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:19.185293 kubelet[1440]: E1002 19:14:19.185242 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:19.258419 kubelet[1440]: E1002 19:14:19.258390 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:14:19.258818 kubelet[1440]: E1002 19:14:19.258793 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-qg5th_kube-system(079c025c-7182-4c3b-9417-081eb20ee218)\"" pod="kube-system/cilium-qg5th" podUID=079c025c-7182-4c3b-9417-081eb20ee218
Oct 2 19:14:20.166478 kubelet[1440]: E1002 19:14:20.166441 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:14:20.185625 kubelet[1440]: E1002 19:14:20.185600 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:21.186365 kubelet[1440]: E1002 19:14:21.186321 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:22.187066 kubelet[1440]: E1002 19:14:22.187029 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:23.188127 kubelet[1440]: E1002 19:14:23.188073 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:24.189249 kubelet[1440]: E1002 19:14:24.189206 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:25.167592 kubelet[1440]: E1002 19:14:25.167560 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:14:25.189961 kubelet[1440]: E1002 19:14:25.189931 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:26.191138 kubelet[1440]: E1002 19:14:26.191093 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:27.191925 kubelet[1440]: E1002 19:14:27.191859 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:28.192968 kubelet[1440]: E1002 19:14:28.192906 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:29.193535 kubelet[1440]: E1002 19:14:29.193484 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:30.168952 kubelet[1440]: E1002 19:14:30.168927 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:14:30.194169 kubelet[1440]: E1002 19:14:30.194134 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:31.194695 kubelet[1440]: E1002 19:14:31.194645 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:31.259238 kubelet[1440]: E1002 19:14:31.259200 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:14:31.259391 kubelet[1440]: E1002 19:14:31.259204 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:14:31.266654 env[1138]: time="2023-10-02T19:14:31.266585524Z" level=info msg="CreateContainer within sandbox \"1345a4f9873144fb6491307fad0e28f0f7738b1cb763d1e1aec9ff89300916c3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:5,}"
Oct 2 19:14:31.275368 env[1138]: time="2023-10-02T19:14:31.275312970Z" level=info msg="CreateContainer within sandbox \"1345a4f9873144fb6491307fad0e28f0f7738b1cb763d1e1aec9ff89300916c3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:5,} returns container id \"230024e2213e0049d234e5248addc3a48c7381e814234894272ada82973abcf7\""
Oct 2 19:14:31.275807 env[1138]: time="2023-10-02T19:14:31.275775334Z" level=info msg="StartContainer for \"230024e2213e0049d234e5248addc3a48c7381e814234894272ada82973abcf7\""
Oct 2 19:14:31.295899 systemd[1]: Started cri-containerd-230024e2213e0049d234e5248addc3a48c7381e814234894272ada82973abcf7.scope.
Oct 2 19:14:31.340919 systemd[1]: cri-containerd-230024e2213e0049d234e5248addc3a48c7381e814234894272ada82973abcf7.scope: Deactivated successfully.
Oct 2 19:14:31.344384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-230024e2213e0049d234e5248addc3a48c7381e814234894272ada82973abcf7-rootfs.mount: Deactivated successfully.
Oct 2 19:14:31.349462 env[1138]: time="2023-10-02T19:14:31.349412415Z" level=info msg="shim disconnected" id=230024e2213e0049d234e5248addc3a48c7381e814234894272ada82973abcf7
Oct 2 19:14:31.349667 env[1138]: time="2023-10-02T19:14:31.349466215Z" level=warning msg="cleaning up after shim disconnected" id=230024e2213e0049d234e5248addc3a48c7381e814234894272ada82973abcf7 namespace=k8s.io
Oct 2 19:14:31.349667 env[1138]: time="2023-10-02T19:14:31.349476816Z" level=info msg="cleaning up dead shim"
Oct 2 19:14:31.357780 env[1138]: time="2023-10-02T19:14:31.357732216Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:14:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2015 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:14:31Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/230024e2213e0049d234e5248addc3a48c7381e814234894272ada82973abcf7/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Oct 2 19:14:31.358037 env[1138]: time="2023-10-02T19:14:31.357968899Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed"
Oct 2 19:14:31.358212 env[1138]: time="2023-10-02T19:14:31.358163501Z" level=error msg="Failed to pipe stdout of container \"230024e2213e0049d234e5248addc3a48c7381e814234894272ada82973abcf7\"" error="reading from a closed fifo"
Oct 2 19:14:31.361764 env[1138]: time="2023-10-02T19:14:31.361711055Z" level=error msg="Failed to pipe stderr of container \"230024e2213e0049d234e5248addc3a48c7381e814234894272ada82973abcf7\"" error="reading from a closed fifo"
Oct 2 19:14:31.363127 env[1138]: time="2023-10-02T19:14:31.363079629Z" level=error msg="StartContainer for \"230024e2213e0049d234e5248addc3a48c7381e814234894272ada82973abcf7\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Oct 2 19:14:31.363345 kubelet[1440]: E1002 19:14:31.363313 1440 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="230024e2213e0049d234e5248addc3a48c7381e814234894272ada82973abcf7"
Oct 2 19:14:31.363430 kubelet[1440]: E1002 19:14:31.363416 1440 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Oct 2 19:14:31.363430 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Oct 2 19:14:31.363430 kubelet[1440]: rm /hostbin/cilium-mount
Oct 2 19:14:31.363430 kubelet[1440]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tfv2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-qg5th_kube-system(079c025c-7182-4c3b-9417-081eb20ee218): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Oct 2 19:14:31.363552 kubelet[1440]: E1002 19:14:31.363454 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-qg5th" podUID=079c025c-7182-4c3b-9417-081eb20ee218
Oct 2 19:14:31.568756 kubelet[1440]: I1002 19:14:31.568664 1440 scope.go:115] "RemoveContainer" containerID="99fa809fbfa6891a14f1bd97cbbe69494ed756b83d16e97350d718275df9e084"
Oct 2 19:14:31.569126 kubelet[1440]: I1002 19:14:31.568941 1440 scope.go:115] "RemoveContainer" containerID="99fa809fbfa6891a14f1bd97cbbe69494ed756b83d16e97350d718275df9e084"
Oct 2 19:14:31.569694 env[1138]: time="2023-10-02T19:14:31.569663291Z" level=info msg="RemoveContainer for \"99fa809fbfa6891a14f1bd97cbbe69494ed756b83d16e97350d718275df9e084\""
Oct 2 19:14:31.569894 env[1138]: time="2023-10-02T19:14:31.569868973Z" level=info msg="RemoveContainer for \"99fa809fbfa6891a14f1bd97cbbe69494ed756b83d16e97350d718275df9e084\""
Oct 2 19:14:31.569976 env[1138]: time="2023-10-02T19:14:31.569947934Z" level=error msg="RemoveContainer for \"99fa809fbfa6891a14f1bd97cbbe69494ed756b83d16e97350d718275df9e084\" failed" error="failed to set removing state for container \"99fa809fbfa6891a14f1bd97cbbe69494ed756b83d16e97350d718275df9e084\": container is already in removing state"
Oct 2 19:14:31.570120 kubelet[1440]: E1002 19:14:31.570103 1440 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"99fa809fbfa6891a14f1bd97cbbe69494ed756b83d16e97350d718275df9e084\": container is already in removing state" containerID="99fa809fbfa6891a14f1bd97cbbe69494ed756b83d16e97350d718275df9e084"
Oct 2 19:14:31.570168 kubelet[1440]: E1002 19:14:31.570136 1440 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "99fa809fbfa6891a14f1bd97cbbe69494ed756b83d16e97350d718275df9e084": container is already in removing state; Skipping pod "cilium-qg5th_kube-system(079c025c-7182-4c3b-9417-081eb20ee218)"
Oct 2
19:14:31.570203 kubelet[1440]: E1002 19:14:31.570192 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:14:31.570431 kubelet[1440]: E1002 19:14:31.570383 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-qg5th_kube-system(079c025c-7182-4c3b-9417-081eb20ee218)\"" pod="kube-system/cilium-qg5th" podUID=079c025c-7182-4c3b-9417-081eb20ee218 Oct 2 19:14:31.572794 env[1138]: time="2023-10-02T19:14:31.572755841Z" level=info msg="RemoveContainer for \"99fa809fbfa6891a14f1bd97cbbe69494ed756b83d16e97350d718275df9e084\" returns successfully" Oct 2 19:14:32.195687 kubelet[1440]: E1002 19:14:32.195609 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:33.196262 kubelet[1440]: E1002 19:14:33.196191 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:34.197319 kubelet[1440]: E1002 19:14:34.197254 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:34.455228 kubelet[1440]: W1002 19:14:34.454967 1440 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod079c025c_7182_4c3b_9417_081eb20ee218.slice/cri-containerd-230024e2213e0049d234e5248addc3a48c7381e814234894272ada82973abcf7.scope WatchSource:0}: task 230024e2213e0049d234e5248addc3a48c7381e814234894272ada82973abcf7 not found: not found Oct 2 19:14:34.687539 env[1138]: time="2023-10-02T19:14:34.687460414Z" level=info msg="StopPodSandbox for \"1345a4f9873144fb6491307fad0e28f0f7738b1cb763d1e1aec9ff89300916c3\"" Oct 2 19:14:34.687539 env[1138]: 
time="2023-10-02T19:14:34.687535895Z" level=info msg="Container to stop \"230024e2213e0049d234e5248addc3a48c7381e814234894272ada82973abcf7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:14:34.688771 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1345a4f9873144fb6491307fad0e28f0f7738b1cb763d1e1aec9ff89300916c3-shm.mount: Deactivated successfully. Oct 2 19:14:34.695381 systemd[1]: cri-containerd-1345a4f9873144fb6491307fad0e28f0f7738b1cb763d1e1aec9ff89300916c3.scope: Deactivated successfully. Oct 2 19:14:34.696954 kernel: kauditd_printk_skb: 165 callbacks suppressed Oct 2 19:14:34.697052 kernel: audit: type=1334 audit(1696274074.693:663): prog-id=67 op=UNLOAD Oct 2 19:14:34.693000 audit: BPF prog-id=67 op=UNLOAD Oct 2 19:14:34.696000 audit: BPF prog-id=70 op=UNLOAD Oct 2 19:14:34.699670 kernel: audit: type=1334 audit(1696274074.696:664): prog-id=70 op=UNLOAD Oct 2 19:14:34.719541 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1345a4f9873144fb6491307fad0e28f0f7738b1cb763d1e1aec9ff89300916c3-rootfs.mount: Deactivated successfully. 
Oct 2 19:14:34.726062 env[1138]: time="2023-10-02T19:14:34.726004774Z" level=info msg="shim disconnected" id=1345a4f9873144fb6491307fad0e28f0f7738b1cb763d1e1aec9ff89300916c3
Oct 2 19:14:34.726687 env[1138]: time="2023-10-02T19:14:34.726658180Z" level=warning msg="cleaning up after shim disconnected" id=1345a4f9873144fb6491307fad0e28f0f7738b1cb763d1e1aec9ff89300916c3 namespace=k8s.io
Oct 2 19:14:34.726775 env[1138]: time="2023-10-02T19:14:34.726760941Z" level=info msg="cleaning up dead shim"
Oct 2 19:14:34.735393 env[1138]: time="2023-10-02T19:14:34.735360421Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:14:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2047 runtime=io.containerd.runc.v2\n"
Oct 2 19:14:34.735799 env[1138]: time="2023-10-02T19:14:34.735767264Z" level=info msg="TearDown network for sandbox \"1345a4f9873144fb6491307fad0e28f0f7738b1cb763d1e1aec9ff89300916c3\" successfully"
Oct 2 19:14:34.735901 env[1138]: time="2023-10-02T19:14:34.735878466Z" level=info msg="StopPodSandbox for \"1345a4f9873144fb6491307fad0e28f0f7738b1cb763d1e1aec9ff89300916c3\" returns successfully"
Oct 2 19:14:34.748090 kubelet[1440]: I1002 19:14:34.748054 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-cni-path\") pod \"079c025c-7182-4c3b-9417-081eb20ee218\" (UID: \"079c025c-7182-4c3b-9417-081eb20ee218\") "
Oct 2 19:14:34.748090 kubelet[1440]: I1002 19:14:34.748093 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-cilium-cgroup\") pod \"079c025c-7182-4c3b-9417-081eb20ee218\" (UID: \"079c025c-7182-4c3b-9417-081eb20ee218\") "
Oct 2 19:14:34.748296 kubelet[1440]: I1002 19:14:34.748118 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/079c025c-7182-4c3b-9417-081eb20ee218-cilium-config-path\") pod \"079c025c-7182-4c3b-9417-081eb20ee218\" (UID: \"079c025c-7182-4c3b-9417-081eb20ee218\") "
Oct 2 19:14:34.748296 kubelet[1440]: I1002 19:14:34.748136 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-host-proc-sys-kernel\") pod \"079c025c-7182-4c3b-9417-081eb20ee218\" (UID: \"079c025c-7182-4c3b-9417-081eb20ee218\") "
Oct 2 19:14:34.748296 kubelet[1440]: I1002 19:14:34.748157 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/079c025c-7182-4c3b-9417-081eb20ee218-clustermesh-secrets\") pod \"079c025c-7182-4c3b-9417-081eb20ee218\" (UID: \"079c025c-7182-4c3b-9417-081eb20ee218\") "
Oct 2 19:14:34.748296 kubelet[1440]: I1002 19:14:34.748177 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-xtables-lock\") pod \"079c025c-7182-4c3b-9417-081eb20ee218\" (UID: \"079c025c-7182-4c3b-9417-081eb20ee218\") "
Oct 2 19:14:34.748296 kubelet[1440]: I1002 19:14:34.748198 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/079c025c-7182-4c3b-9417-081eb20ee218-hubble-tls\") pod \"079c025c-7182-4c3b-9417-081eb20ee218\" (UID: \"079c025c-7182-4c3b-9417-081eb20ee218\") "
Oct 2 19:14:34.748296 kubelet[1440]: I1002 19:14:34.748215 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-host-proc-sys-net\") pod \"079c025c-7182-4c3b-9417-081eb20ee218\" (UID: \"079c025c-7182-4c3b-9417-081eb20ee218\") "
Oct 2 19:14:34.748450 kubelet[1440]: I1002 19:14:34.748232 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-lib-modules\") pod \"079c025c-7182-4c3b-9417-081eb20ee218\" (UID: \"079c025c-7182-4c3b-9417-081eb20ee218\") "
Oct 2 19:14:34.748450 kubelet[1440]: I1002 19:14:34.748249 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-bpf-maps\") pod \"079c025c-7182-4c3b-9417-081eb20ee218\" (UID: \"079c025c-7182-4c3b-9417-081eb20ee218\") "
Oct 2 19:14:34.748450 kubelet[1440]: I1002 19:14:34.748265 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-hostproc\") pod \"079c025c-7182-4c3b-9417-081eb20ee218\" (UID: \"079c025c-7182-4c3b-9417-081eb20ee218\") "
Oct 2 19:14:34.748450 kubelet[1440]: I1002 19:14:34.748284 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-etc-cni-netd\") pod \"079c025c-7182-4c3b-9417-081eb20ee218\" (UID: \"079c025c-7182-4c3b-9417-081eb20ee218\") "
Oct 2 19:14:34.748450 kubelet[1440]: I1002 19:14:34.748300 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-cilium-run\") pod \"079c025c-7182-4c3b-9417-081eb20ee218\" (UID: \"079c025c-7182-4c3b-9417-081eb20ee218\") "
Oct 2 19:14:34.748450 kubelet[1440]: I1002 19:14:34.748319 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfv2t\" (UniqueName: \"kubernetes.io/projected/079c025c-7182-4c3b-9417-081eb20ee218-kube-api-access-tfv2t\") pod \"079c025c-7182-4c3b-9417-081eb20ee218\" (UID: \"079c025c-7182-4c3b-9417-081eb20ee218\") "
Oct 2 19:14:34.748960 kubelet[1440]: I1002 19:14:34.748768 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "079c025c-7182-4c3b-9417-081eb20ee218" (UID: "079c025c-7182-4c3b-9417-081eb20ee218"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:14:34.748960 kubelet[1440]: I1002 19:14:34.748816 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "079c025c-7182-4c3b-9417-081eb20ee218" (UID: "079c025c-7182-4c3b-9417-081eb20ee218"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:14:34.748960 kubelet[1440]: W1002 19:14:34.748803 1440 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/079c025c-7182-4c3b-9417-081eb20ee218/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Oct 2 19:14:34.748960 kubelet[1440]: I1002 19:14:34.748833 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-cni-path" (OuterVolumeSpecName: "cni-path") pod "079c025c-7182-4c3b-9417-081eb20ee218" (UID: "079c025c-7182-4c3b-9417-081eb20ee218"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:14:34.748960 kubelet[1440]: I1002 19:14:34.748873 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "079c025c-7182-4c3b-9417-081eb20ee218" (UID: "079c025c-7182-4c3b-9417-081eb20ee218"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:14:34.749182 kubelet[1440]: I1002 19:14:34.748884 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "079c025c-7182-4c3b-9417-081eb20ee218" (UID: "079c025c-7182-4c3b-9417-081eb20ee218"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:14:34.749182 kubelet[1440]: I1002 19:14:34.748894 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "079c025c-7182-4c3b-9417-081eb20ee218" (UID: "079c025c-7182-4c3b-9417-081eb20ee218"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:14:34.749182 kubelet[1440]: I1002 19:14:34.748909 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-hostproc" (OuterVolumeSpecName: "hostproc") pod "079c025c-7182-4c3b-9417-081eb20ee218" (UID: "079c025c-7182-4c3b-9417-081eb20ee218"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:14:34.749182 kubelet[1440]: I1002 19:14:34.748920 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "079c025c-7182-4c3b-9417-081eb20ee218" (UID: "079c025c-7182-4c3b-9417-081eb20ee218"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:14:34.749182 kubelet[1440]: I1002 19:14:34.748927 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "079c025c-7182-4c3b-9417-081eb20ee218" (UID: "079c025c-7182-4c3b-9417-081eb20ee218"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:14:34.749300 kubelet[1440]: I1002 19:14:34.748940 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "079c025c-7182-4c3b-9417-081eb20ee218" (UID: "079c025c-7182-4c3b-9417-081eb20ee218"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:14:34.750535 kubelet[1440]: I1002 19:14:34.750464 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/079c025c-7182-4c3b-9417-081eb20ee218-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "079c025c-7182-4c3b-9417-081eb20ee218" (UID: "079c025c-7182-4c3b-9417-081eb20ee218"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 2 19:14:34.752663 kubelet[1440]: I1002 19:14:34.751287 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/079c025c-7182-4c3b-9417-081eb20ee218-kube-api-access-tfv2t" (OuterVolumeSpecName: "kube-api-access-tfv2t") pod "079c025c-7182-4c3b-9417-081eb20ee218" (UID: "079c025c-7182-4c3b-9417-081eb20ee218"). InnerVolumeSpecName "kube-api-access-tfv2t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 19:14:34.752663 kubelet[1440]: I1002 19:14:34.752176 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/079c025c-7182-4c3b-9417-081eb20ee218-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "079c025c-7182-4c3b-9417-081eb20ee218" (UID: "079c025c-7182-4c3b-9417-081eb20ee218"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 2 19:14:34.752663 kubelet[1440]: I1002 19:14:34.752458 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/079c025c-7182-4c3b-9417-081eb20ee218-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "079c025c-7182-4c3b-9417-081eb20ee218" (UID: "079c025c-7182-4c3b-9417-081eb20ee218"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 19:14:34.751809 systemd[1]: var-lib-kubelet-pods-079c025c\x2d7182\x2d4c3b\x2d9417\x2d081eb20ee218-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtfv2t.mount: Deactivated successfully.
Oct 2 19:14:34.752975 systemd[1]: var-lib-kubelet-pods-079c025c\x2d7182\x2d4c3b\x2d9417\x2d081eb20ee218-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Oct 2 19:14:34.753065 systemd[1]: var-lib-kubelet-pods-079c025c\x2d7182\x2d4c3b\x2d9417\x2d081eb20ee218-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Oct 2 19:14:34.848972 kubelet[1440]: I1002 19:14:34.848903 1440 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-cni-path\") on node \"10.0.0.113\" DevicePath \"\""
Oct 2 19:14:34.848972 kubelet[1440]: I1002 19:14:34.848944 1440 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-xtables-lock\") on node \"10.0.0.113\" DevicePath \"\""
Oct 2 19:14:34.848972 kubelet[1440]: I1002 19:14:34.848957 1440 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-cilium-cgroup\") on node \"10.0.0.113\" DevicePath \"\""
Oct 2 19:14:34.848972 kubelet[1440]: I1002 19:14:34.848967 1440 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/079c025c-7182-4c3b-9417-081eb20ee218-cilium-config-path\") on node \"10.0.0.113\" DevicePath \"\""
Oct 2 19:14:34.848972 kubelet[1440]: I1002 19:14:34.848978 1440 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-host-proc-sys-kernel\") on node \"10.0.0.113\" DevicePath \"\""
Oct 2 19:14:34.849290 kubelet[1440]: I1002 19:14:34.848991 1440 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/079c025c-7182-4c3b-9417-081eb20ee218-clustermesh-secrets\") on node \"10.0.0.113\" DevicePath \"\""
Oct 2 19:14:34.849290 kubelet[1440]: I1002 19:14:34.849002 1440 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-lib-modules\") on node \"10.0.0.113\" DevicePath \"\""
Oct 2 19:14:34.849290 kubelet[1440]: I1002 19:14:34.849011 1440 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/079c025c-7182-4c3b-9417-081eb20ee218-hubble-tls\") on node \"10.0.0.113\" DevicePath \"\""
Oct 2 19:14:34.849290 kubelet[1440]: I1002 19:14:34.849021 1440 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-host-proc-sys-net\") on node \"10.0.0.113\" DevicePath \"\""
Oct 2 19:14:34.849290 kubelet[1440]: I1002 19:14:34.849035 1440 reconciler.go:399] "Volume detached for volume \"kube-api-access-tfv2t\" (UniqueName: \"kubernetes.io/projected/079c025c-7182-4c3b-9417-081eb20ee218-kube-api-access-tfv2t\") on node \"10.0.0.113\" DevicePath \"\""
Oct 2 19:14:34.849290 kubelet[1440]: I1002 19:14:34.849045 1440 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-bpf-maps\") on node \"10.0.0.113\" DevicePath \"\""
Oct 2 19:14:34.849290 kubelet[1440]: I1002 19:14:34.849055 1440 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-hostproc\") on node \"10.0.0.113\" DevicePath \"\""
Oct 2 19:14:34.849290 kubelet[1440]: I1002 19:14:34.849064 1440 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-etc-cni-netd\") on node \"10.0.0.113\" DevicePath \"\""
Oct 2 19:14:34.849290 kubelet[1440]: I1002 19:14:34.849073 1440 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/079c025c-7182-4c3b-9417-081eb20ee218-cilium-run\") on node \"10.0.0.113\" DevicePath \"\""
Oct 2 19:14:35.042216 kubelet[1440]: E1002 19:14:35.042083 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:35.170523 kubelet[1440]: E1002 19:14:35.170474 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:14:35.197812 kubelet[1440]: E1002 19:14:35.197751 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:14:35.263799 systemd[1]: Removed slice kubepods-burstable-pod079c025c_7182_4c3b_9417_081eb20ee218.slice.
Oct 2 19:14:35.289527 kubelet[1440]: I1002 19:14:35.289476 1440 topology_manager.go:205] "Topology Admit Handler"
Oct 2 19:14:35.289527 kubelet[1440]: E1002 19:14:35.289534 1440 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="079c025c-7182-4c3b-9417-081eb20ee218" containerName="mount-cgroup"
Oct 2 19:14:35.289719 kubelet[1440]: E1002 19:14:35.289544 1440 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="079c025c-7182-4c3b-9417-081eb20ee218" containerName="mount-cgroup"
Oct 2 19:14:35.289719 kubelet[1440]: E1002 19:14:35.289550 1440 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="079c025c-7182-4c3b-9417-081eb20ee218" containerName="mount-cgroup"
Oct 2 19:14:35.289719 kubelet[1440]: E1002 19:14:35.289556 1440 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="079c025c-7182-4c3b-9417-081eb20ee218" containerName="mount-cgroup"
Oct 2 19:14:35.289719 kubelet[1440]: I1002 19:14:35.289592 1440 memory_manager.go:345] "RemoveStaleState removing state" podUID="079c025c-7182-4c3b-9417-081eb20ee218" containerName="mount-cgroup"
Oct 2 19:14:35.289719 kubelet[1440]: I1002 19:14:35.289600 1440 memory_manager.go:345] "RemoveStaleState removing state" podUID="079c025c-7182-4c3b-9417-081eb20ee218" containerName="mount-cgroup"
Oct 2 19:14:35.289719 kubelet[1440]: I1002 19:14:35.289605 1440 memory_manager.go:345] "RemoveStaleState removing state" podUID="079c025c-7182-4c3b-9417-081eb20ee218" containerName="mount-cgroup"
Oct 2 19:14:35.289719 kubelet[1440]: I1002 19:14:35.289610 1440 memory_manager.go:345] "RemoveStaleState removing state" podUID="079c025c-7182-4c3b-9417-081eb20ee218" containerName="mount-cgroup"
Oct 2 19:14:35.289719 kubelet[1440]: E1002 19:14:35.289622 1440 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="079c025c-7182-4c3b-9417-081eb20ee218" containerName="mount-cgroup"
Oct 2 19:14:35.289719 kubelet[1440]: I1002 19:14:35.289653 1440 memory_manager.go:345] "RemoveStaleState removing state" podUID="079c025c-7182-4c3b-9417-081eb20ee218" containerName="mount-cgroup"
Oct 2 19:14:35.289719 kubelet[1440]: I1002 19:14:35.289661 1440 memory_manager.go:345] "RemoveStaleState removing state" podUID="079c025c-7182-4c3b-9417-081eb20ee218" containerName="mount-cgroup"
Oct 2 19:14:35.289719 kubelet[1440]: E1002 19:14:35.289673 1440 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="079c025c-7182-4c3b-9417-081eb20ee218" containerName="mount-cgroup"
Oct 2 19:14:35.295175 systemd[1]: Created slice kubepods-burstable-pod70627fb3_d893_4eee_af63_f9728d7a7ff1.slice.
Oct 2 19:14:35.352349 kubelet[1440]: I1002 19:14:35.352316 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-cni-path\") pod \"cilium-9dtvf\" (UID: \"70627fb3-d893-4eee-af63-f9728d7a7ff1\") " pod="kube-system/cilium-9dtvf"
Oct 2 19:14:35.352521 kubelet[1440]: I1002 19:14:35.352510 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-lib-modules\") pod \"cilium-9dtvf\" (UID: \"70627fb3-d893-4eee-af63-f9728d7a7ff1\") " pod="kube-system/cilium-9dtvf"
Oct 2 19:14:35.352665 kubelet[1440]: I1002 19:14:35.352654 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70627fb3-d893-4eee-af63-f9728d7a7ff1-cilium-config-path\") pod \"cilium-9dtvf\" (UID: \"70627fb3-d893-4eee-af63-f9728d7a7ff1\") " pod="kube-system/cilium-9dtvf"
Oct 2 19:14:35.352760 kubelet[1440]: I1002 19:14:35.352750 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-etc-cni-netd\") pod \"cilium-9dtvf\" (UID: \"70627fb3-d893-4eee-af63-f9728d7a7ff1\") " pod="kube-system/cilium-9dtvf"
Oct 2 19:14:35.352883 kubelet[1440]: I1002 19:14:35.352849 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-host-proc-sys-net\") pod \"cilium-9dtvf\" (UID: \"70627fb3-d893-4eee-af63-f9728d7a7ff1\") " pod="kube-system/cilium-9dtvf"
Oct 2 19:14:35.352922 kubelet[1440]: I1002 19:14:35.352904 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddz78\" (UniqueName: \"kubernetes.io/projected/70627fb3-d893-4eee-af63-f9728d7a7ff1-kube-api-access-ddz78\") pod \"cilium-9dtvf\" (UID: \"70627fb3-d893-4eee-af63-f9728d7a7ff1\") " pod="kube-system/cilium-9dtvf"
Oct 2 19:14:35.352951 kubelet[1440]: I1002 19:14:35.352934 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-cilium-cgroup\") pod \"cilium-9dtvf\" (UID: \"70627fb3-d893-4eee-af63-f9728d7a7ff1\") " pod="kube-system/cilium-9dtvf"
Oct 2 19:14:35.352980 kubelet[1440]: I1002 19:14:35.352968 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-xtables-lock\") pod \"cilium-9dtvf\" (UID: \"70627fb3-d893-4eee-af63-f9728d7a7ff1\") " pod="kube-system/cilium-9dtvf"
Oct 2 19:14:35.353006 kubelet[1440]: I1002 19:14:35.352988 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70627fb3-d893-4eee-af63-f9728d7a7ff1-hubble-tls\") pod \"cilium-9dtvf\" (UID: \"70627fb3-d893-4eee-af63-f9728d7a7ff1\") " pod="kube-system/cilium-9dtvf"
Oct 2 19:14:35.353038 kubelet[1440]: I1002 19:14:35.353016 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70627fb3-d893-4eee-af63-f9728d7a7ff1-clustermesh-secrets\") pod \"cilium-9dtvf\" (UID: \"70627fb3-d893-4eee-af63-f9728d7a7ff1\") " pod="kube-system/cilium-9dtvf"
Oct 2 19:14:35.353064 kubelet[1440]: I1002 19:14:35.353052 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-host-proc-sys-kernel\") pod \"cilium-9dtvf\" (UID: \"70627fb3-d893-4eee-af63-f9728d7a7ff1\") " pod="kube-system/cilium-9dtvf"
Oct 2 19:14:35.353136 kubelet[1440]: I1002 19:14:35.353096 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-cilium-run\") pod \"cilium-9dtvf\" (UID: \"70627fb3-d893-4eee-af63-f9728d7a7ff1\") " pod="kube-system/cilium-9dtvf"
Oct 2 19:14:35.353175 kubelet[1440]: I1002 19:14:35.353148 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-bpf-maps\") pod \"cilium-9dtvf\" (UID: \"70627fb3-d893-4eee-af63-f9728d7a7ff1\") " pod="kube-system/cilium-9dtvf"
Oct 2 19:14:35.353175 kubelet[1440]: I1002 19:14:35.353169 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-hostproc\") pod \"cilium-9dtvf\" (UID: \"70627fb3-d893-4eee-af63-f9728d7a7ff1\") " pod="kube-system/cilium-9dtvf"
Oct 2 19:14:35.576606 kubelet[1440]: I1002 19:14:35.576584 1440 scope.go:115] "RemoveContainer" containerID="230024e2213e0049d234e5248addc3a48c7381e814234894272ada82973abcf7"
Oct 2 19:14:35.578527 env[1138]: time="2023-10-02T19:14:35.578490350Z" level=info msg="RemoveContainer for \"230024e2213e0049d234e5248addc3a48c7381e814234894272ada82973abcf7\""
Oct 2 19:14:35.580869 env[1138]: time="2023-10-02T19:14:35.580832172Z" level=info msg="RemoveContainer for \"230024e2213e0049d234e5248addc3a48c7381e814234894272ada82973abcf7\" returns successfully"
Oct 2 19:14:35.604479 kubelet[1440]: E1002 19:14:35.604448 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:14:35.605188 env[1138]: time="2023-10-02T19:14:35.605018994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9dtvf,Uid:70627fb3-d893-4eee-af63-f9728d7a7ff1,Namespace:kube-system,Attempt:0,}"
Oct 2 19:14:35.617289 env[1138]: time="2023-10-02T19:14:35.617227425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 2 19:14:35.617397 env[1138]: time="2023-10-02T19:14:35.617270746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 2 19:14:35.617397 env[1138]: time="2023-10-02T19:14:35.617283986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 2 19:14:35.617690 env[1138]: time="2023-10-02T19:14:35.617646029Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/814f3db8534747302957a5e24b344f9dbb77884e358981888947ca83253501d7 pid=2072 runtime=io.containerd.runc.v2
Oct 2 19:14:35.628743 systemd[1]: Started cri-containerd-814f3db8534747302957a5e24b344f9dbb77884e358981888947ca83253501d7.scope.
Oct 2 19:14:35.657000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.657000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.662339 kernel: audit: type=1400 audit(1696274075.657:665): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.662407 kernel: audit: type=1400 audit(1696274075.657:666): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.662515 kernel: audit: type=1400 audit(1696274075.657:667): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.657000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.663962 kernel: audit: type=1400 audit(1696274075.657:668): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.657000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.665571 kernel: audit: type=1400 audit(1696274075.657:669): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.657000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.657000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.668932 kernel: audit: type=1400 audit(1696274075.657:670): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.668975 kernel: audit: type=1400 audit(1696274075.657:671): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.657000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.657000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.672229 kernel: audit: type=1400 audit(1696274075.657:672): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.657000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.658000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.658000 audit: BPF prog-id=78 op=LOAD Oct 2 19:14:35.659000 audit[2082]: AVC avc: denied { bpf } for pid=2082 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.659000 audit[2082]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000145b38 a2=10 a3=0 items=0 ppid=2072 pid=2082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:35.659000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831346633646238353334373437333032393537613565323462333434 Oct 2 19:14:35.659000 audit[2082]: AVC avc: denied { perfmon } for pid=2082 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.659000 audit[2082]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001455a0 a2=3c a3=0 items=0 ppid=2072 pid=2082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:35.659000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831346633646238353334373437333032393537613565323462333434 Oct 2 19:14:35.659000 audit[2082]: AVC avc: denied { bpf } for pid=2082 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.659000 audit[2082]: AVC avc: denied { bpf } for pid=2082 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.659000 audit[2082]: AVC avc: denied { bpf } for pid=2082 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.659000 audit[2082]: AVC avc: denied { perfmon } for pid=2082 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.659000 audit[2082]: AVC avc: denied { perfmon } for pid=2082 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.659000 audit[2082]: AVC avc: denied { perfmon } for pid=2082 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.659000 audit[2082]: AVC avc: denied { perfmon } for pid=2082 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.659000 audit[2082]: AVC avc: denied { perfmon } for pid=2082 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.659000 audit[2082]: AVC avc: denied { bpf } for pid=2082 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.659000 audit[2082]: AVC avc: denied { bpf } for pid=2082 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.659000 audit: BPF 
prog-id=79 op=LOAD Oct 2 19:14:35.659000 audit[2082]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001458e0 a2=78 a3=0 items=0 ppid=2072 pid=2082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:35.659000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831346633646238353334373437333032393537613565323462333434 Oct 2 19:14:35.660000 audit[2082]: AVC avc: denied { bpf } for pid=2082 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.660000 audit[2082]: AVC avc: denied { bpf } for pid=2082 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.660000 audit[2082]: AVC avc: denied { perfmon } for pid=2082 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.660000 audit[2082]: AVC avc: denied { perfmon } for pid=2082 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.660000 audit[2082]: AVC avc: denied { perfmon } for pid=2082 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.660000 audit[2082]: AVC avc: denied { perfmon } for pid=2082 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.660000 audit[2082]: AVC avc: denied { perfmon } for 
pid=2082 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.660000 audit[2082]: AVC avc: denied { bpf } for pid=2082 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.660000 audit[2082]: AVC avc: denied { bpf } for pid=2082 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.660000 audit: BPF prog-id=80 op=LOAD Oct 2 19:14:35.660000 audit[2082]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000145670 a2=78 a3=0 items=0 ppid=2072 pid=2082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:35.660000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831346633646238353334373437333032393537613565323462333434 Oct 2 19:14:35.662000 audit: BPF prog-id=80 op=UNLOAD Oct 2 19:14:35.662000 audit: BPF prog-id=79 op=UNLOAD Oct 2 19:14:35.662000 audit[2082]: AVC avc: denied { bpf } for pid=2082 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.662000 audit[2082]: AVC avc: denied { bpf } for pid=2082 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.662000 audit[2082]: AVC avc: denied { bpf } for pid=2082 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:14:35.662000 audit[2082]: AVC avc: denied { perfmon } for pid=2082 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.662000 audit[2082]: AVC avc: denied { perfmon } for pid=2082 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.662000 audit[2082]: AVC avc: denied { perfmon } for pid=2082 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.662000 audit[2082]: AVC avc: denied { perfmon } for pid=2082 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.662000 audit[2082]: AVC avc: denied { perfmon } for pid=2082 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.662000 audit[2082]: AVC avc: denied { bpf } for pid=2082 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.662000 audit[2082]: AVC avc: denied { bpf } for pid=2082 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:35.662000 audit: BPF prog-id=81 op=LOAD Oct 2 19:14:35.662000 audit[2082]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000145b40 a2=78 a3=0 items=0 ppid=2072 pid=2082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:35.662000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831346633646238353334373437333032393537613565323462333434 Oct 2 19:14:35.682122 env[1138]: time="2023-10-02T19:14:35.682085460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9dtvf,Uid:70627fb3-d893-4eee-af63-f9728d7a7ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"814f3db8534747302957a5e24b344f9dbb77884e358981888947ca83253501d7\"" Oct 2 19:14:35.683060 kubelet[1440]: E1002 19:14:35.683042 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:14:35.685174 env[1138]: time="2023-10-02T19:14:35.684975726Z" level=info msg="CreateContainer within sandbox \"814f3db8534747302957a5e24b344f9dbb77884e358981888947ca83253501d7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:14:35.695464 env[1138]: time="2023-10-02T19:14:35.695424182Z" level=info msg="CreateContainer within sandbox \"814f3db8534747302957a5e24b344f9dbb77884e358981888947ca83253501d7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8a5641d8ca198fbfa5b5e8b43ada7f3a6e7d184297f0f831678bc77e728b62b9\"" Oct 2 19:14:35.696219 env[1138]: time="2023-10-02T19:14:35.696194149Z" level=info msg="StartContainer for \"8a5641d8ca198fbfa5b5e8b43ada7f3a6e7d184297f0f831678bc77e728b62b9\"" Oct 2 19:14:35.716405 systemd[1]: Started cri-containerd-8a5641d8ca198fbfa5b5e8b43ada7f3a6e7d184297f0f831678bc77e728b62b9.scope. Oct 2 19:14:35.740381 systemd[1]: cri-containerd-8a5641d8ca198fbfa5b5e8b43ada7f3a6e7d184297f0f831678bc77e728b62b9.scope: Deactivated successfully. 
Oct 2 19:14:35.751761 env[1138]: time="2023-10-02T19:14:35.751714138Z" level=info msg="shim disconnected" id=8a5641d8ca198fbfa5b5e8b43ada7f3a6e7d184297f0f831678bc77e728b62b9 Oct 2 19:14:35.751987 env[1138]: time="2023-10-02T19:14:35.751967420Z" level=warning msg="cleaning up after shim disconnected" id=8a5641d8ca198fbfa5b5e8b43ada7f3a6e7d184297f0f831678bc77e728b62b9 namespace=k8s.io Oct 2 19:14:35.752085 env[1138]: time="2023-10-02T19:14:35.752069821Z" level=info msg="cleaning up dead shim" Oct 2 19:14:35.763399 env[1138]: time="2023-10-02T19:14:35.763348725Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:14:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2128 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:14:35Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/8a5641d8ca198fbfa5b5e8b43ada7f3a6e7d184297f0f831678bc77e728b62b9/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:14:35.763655 env[1138]: time="2023-10-02T19:14:35.763586527Z" level=error msg="copy shim log" error="read /proc/self/fd/30: file already closed" Oct 2 19:14:35.764718 env[1138]: time="2023-10-02T19:14:35.764679977Z" level=error msg="Failed to pipe stderr of container \"8a5641d8ca198fbfa5b5e8b43ada7f3a6e7d184297f0f831678bc77e728b62b9\"" error="reading from a closed fifo" Oct 2 19:14:35.764828 env[1138]: time="2023-10-02T19:14:35.764704217Z" level=error msg="Failed to pipe stdout of container \"8a5641d8ca198fbfa5b5e8b43ada7f3a6e7d184297f0f831678bc77e728b62b9\"" error="reading from a closed fifo" Oct 2 19:14:35.767337 env[1138]: time="2023-10-02T19:14:35.767292361Z" level=error msg="StartContainer for \"8a5641d8ca198fbfa5b5e8b43ada7f3a6e7d184297f0f831678bc77e728b62b9\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:14:35.767635 kubelet[1440]: E1002 19:14:35.767599 1440 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="8a5641d8ca198fbfa5b5e8b43ada7f3a6e7d184297f0f831678bc77e728b62b9" Oct 2 19:14:35.767725 kubelet[1440]: E1002 19:14:35.767705 1440 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:14:35.767725 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:14:35.767725 kubelet[1440]: rm /hostbin/cilium-mount Oct 2 19:14:35.767725 kubelet[1440]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ddz78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-9dtvf_kube-system(70627fb3-d893-4eee-af63-f9728d7a7ff1): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:14:35.767868 kubelet[1440]: E1002 19:14:35.767741 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-9dtvf" podUID=70627fb3-d893-4eee-af63-f9728d7a7ff1 Oct 2 19:14:36.198174 kubelet[1440]: E1002 19:14:36.198128 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:36.580510 env[1138]: time="2023-10-02T19:14:36.580426246Z" level=info msg="StopPodSandbox for \"814f3db8534747302957a5e24b344f9dbb77884e358981888947ca83253501d7\"" Oct 2 19:14:36.580671 env[1138]: time="2023-10-02T19:14:36.580527527Z" level=info msg="Container to stop \"8a5641d8ca198fbfa5b5e8b43ada7f3a6e7d184297f0f831678bc77e728b62b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:14:36.586440 systemd[1]: cri-containerd-814f3db8534747302957a5e24b344f9dbb77884e358981888947ca83253501d7.scope: Deactivated successfully. 
Oct 2 19:14:36.585000 audit: BPF prog-id=78 op=UNLOAD Oct 2 19:14:36.592000 audit: BPF prog-id=81 op=UNLOAD Oct 2 19:14:36.611524 env[1138]: time="2023-10-02T19:14:36.611470886Z" level=info msg="shim disconnected" id=814f3db8534747302957a5e24b344f9dbb77884e358981888947ca83253501d7 Oct 2 19:14:36.611524 env[1138]: time="2023-10-02T19:14:36.611520926Z" level=warning msg="cleaning up after shim disconnected" id=814f3db8534747302957a5e24b344f9dbb77884e358981888947ca83253501d7 namespace=k8s.io Oct 2 19:14:36.611524 env[1138]: time="2023-10-02T19:14:36.611529726Z" level=info msg="cleaning up dead shim" Oct 2 19:14:36.620422 env[1138]: time="2023-10-02T19:14:36.620376606Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:14:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2159 runtime=io.containerd.runc.v2\n" Oct 2 19:14:36.620700 env[1138]: time="2023-10-02T19:14:36.620677409Z" level=info msg="TearDown network for sandbox \"814f3db8534747302957a5e24b344f9dbb77884e358981888947ca83253501d7\" successfully" Oct 2 19:14:36.620742 env[1138]: time="2023-10-02T19:14:36.620701649Z" level=info msg="StopPodSandbox for \"814f3db8534747302957a5e24b344f9dbb77884e358981888947ca83253501d7\" returns successfully" Oct 2 19:14:36.662015 kubelet[1440]: I1002 19:14:36.661976 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-xtables-lock\") pod \"70627fb3-d893-4eee-af63-f9728d7a7ff1\" (UID: \"70627fb3-d893-4eee-af63-f9728d7a7ff1\") " Oct 2 19:14:36.662015 kubelet[1440]: I1002 19:14:36.662025 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70627fb3-d893-4eee-af63-f9728d7a7ff1-hubble-tls\") pod \"70627fb3-d893-4eee-af63-f9728d7a7ff1\" (UID: \"70627fb3-d893-4eee-af63-f9728d7a7ff1\") " Oct 2 19:14:36.662201 kubelet[1440]: I1002 19:14:36.662063 1440 reconciler.go:211] 
"operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-etc-cni-netd\") pod \"70627fb3-d893-4eee-af63-f9728d7a7ff1\" (UID: \"70627fb3-d893-4eee-af63-f9728d7a7ff1\") " Oct 2 19:14:36.662201 kubelet[1440]: I1002 19:14:36.662085 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70627fb3-d893-4eee-af63-f9728d7a7ff1-clustermesh-secrets\") pod \"70627fb3-d893-4eee-af63-f9728d7a7ff1\" (UID: \"70627fb3-d893-4eee-af63-f9728d7a7ff1\") " Oct 2 19:14:36.662201 kubelet[1440]: I1002 19:14:36.662081 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "70627fb3-d893-4eee-af63-f9728d7a7ff1" (UID: "70627fb3-d893-4eee-af63-f9728d7a7ff1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:14:36.662201 kubelet[1440]: I1002 19:14:36.662129 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "70627fb3-d893-4eee-af63-f9728d7a7ff1" (UID: "70627fb3-d893-4eee-af63-f9728d7a7ff1"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:14:36.662201 kubelet[1440]: I1002 19:14:36.662174 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-bpf-maps\") pod \"70627fb3-d893-4eee-af63-f9728d7a7ff1\" (UID: \"70627fb3-d893-4eee-af63-f9728d7a7ff1\") " Oct 2 19:14:36.662201 kubelet[1440]: I1002 19:14:36.662193 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-lib-modules\") pod \"70627fb3-d893-4eee-af63-f9728d7a7ff1\" (UID: \"70627fb3-d893-4eee-af63-f9728d7a7ff1\") " Oct 2 19:14:36.662412 kubelet[1440]: I1002 19:14:36.662216 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70627fb3-d893-4eee-af63-f9728d7a7ff1-cilium-config-path\") pod \"70627fb3-d893-4eee-af63-f9728d7a7ff1\" (UID: \"70627fb3-d893-4eee-af63-f9728d7a7ff1\") " Oct 2 19:14:36.662412 kubelet[1440]: I1002 19:14:36.662235 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "70627fb3-d893-4eee-af63-f9728d7a7ff1" (UID: "70627fb3-d893-4eee-af63-f9728d7a7ff1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:14:36.662412 kubelet[1440]: I1002 19:14:36.662251 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "70627fb3-d893-4eee-af63-f9728d7a7ff1" (UID: "70627fb3-d893-4eee-af63-f9728d7a7ff1"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:14:36.662412 kubelet[1440]: I1002 19:14:36.662275 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-host-proc-sys-net\") pod \"70627fb3-d893-4eee-af63-f9728d7a7ff1\" (UID: \"70627fb3-d893-4eee-af63-f9728d7a7ff1\") " Oct 2 19:14:36.662412 kubelet[1440]: I1002 19:14:36.662292 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-cilium-cgroup\") pod \"70627fb3-d893-4eee-af63-f9728d7a7ff1\" (UID: \"70627fb3-d893-4eee-af63-f9728d7a7ff1\") " Oct 2 19:14:36.662531 kubelet[1440]: I1002 19:14:36.662329 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "70627fb3-d893-4eee-af63-f9728d7a7ff1" (UID: "70627fb3-d893-4eee-af63-f9728d7a7ff1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:14:36.662531 kubelet[1440]: I1002 19:14:36.662414 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "70627fb3-d893-4eee-af63-f9728d7a7ff1" (UID: "70627fb3-d893-4eee-af63-f9728d7a7ff1"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:14:36.662531 kubelet[1440]: W1002 19:14:36.662434 1440 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/70627fb3-d893-4eee-af63-f9728d7a7ff1/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:14:36.662531 kubelet[1440]: I1002 19:14:36.662494 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "70627fb3-d893-4eee-af63-f9728d7a7ff1" (UID: "70627fb3-d893-4eee-af63-f9728d7a7ff1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:14:36.663840 kubelet[1440]: I1002 19:14:36.662308 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-host-proc-sys-kernel\") pod \"70627fb3-d893-4eee-af63-f9728d7a7ff1\" (UID: \"70627fb3-d893-4eee-af63-f9728d7a7ff1\") " Oct 2 19:14:36.663840 kubelet[1440]: I1002 19:14:36.663700 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-hostproc\") pod \"70627fb3-d893-4eee-af63-f9728d7a7ff1\" (UID: \"70627fb3-d893-4eee-af63-f9728d7a7ff1\") " Oct 2 19:14:36.663840 kubelet[1440]: I1002 19:14:36.663721 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-cni-path\") pod \"70627fb3-d893-4eee-af63-f9728d7a7ff1\" (UID: \"70627fb3-d893-4eee-af63-f9728d7a7ff1\") " Oct 2 19:14:36.663840 kubelet[1440]: I1002 19:14:36.663775 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-hostproc" (OuterVolumeSpecName: "hostproc") pod "70627fb3-d893-4eee-af63-f9728d7a7ff1" (UID: "70627fb3-d893-4eee-af63-f9728d7a7ff1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:14:36.663840 kubelet[1440]: I1002 19:14:36.663796 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-cni-path" (OuterVolumeSpecName: "cni-path") pod "70627fb3-d893-4eee-af63-f9728d7a7ff1" (UID: "70627fb3-d893-4eee-af63-f9728d7a7ff1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:14:36.663840 kubelet[1440]: I1002 19:14:36.663822 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-cilium-run\") pod \"70627fb3-d893-4eee-af63-f9728d7a7ff1\" (UID: \"70627fb3-d893-4eee-af63-f9728d7a7ff1\") " Oct 2 19:14:36.664093 kubelet[1440]: I1002 19:14:36.663873 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddz78\" (UniqueName: \"kubernetes.io/projected/70627fb3-d893-4eee-af63-f9728d7a7ff1-kube-api-access-ddz78\") pod \"70627fb3-d893-4eee-af63-f9728d7a7ff1\" (UID: \"70627fb3-d893-4eee-af63-f9728d7a7ff1\") " Oct 2 19:14:36.664093 kubelet[1440]: I1002 19:14:36.663908 1440 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-xtables-lock\") on node \"10.0.0.113\" DevicePath \"\"" Oct 2 19:14:36.664093 kubelet[1440]: I1002 19:14:36.663920 1440 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-lib-modules\") on node \"10.0.0.113\" DevicePath \"\"" Oct 2 19:14:36.664093 kubelet[1440]: I1002 19:14:36.663929 1440 reconciler.go:399] 
"Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-etc-cni-netd\") on node \"10.0.0.113\" DevicePath \"\"" Oct 2 19:14:36.664093 kubelet[1440]: I1002 19:14:36.663938 1440 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-bpf-maps\") on node \"10.0.0.113\" DevicePath \"\"" Oct 2 19:14:36.664093 kubelet[1440]: I1002 19:14:36.663947 1440 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-cni-path\") on node \"10.0.0.113\" DevicePath \"\"" Oct 2 19:14:36.664093 kubelet[1440]: I1002 19:14:36.663956 1440 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-host-proc-sys-net\") on node \"10.0.0.113\" DevicePath \"\"" Oct 2 19:14:36.664093 kubelet[1440]: I1002 19:14:36.663965 1440 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-cilium-cgroup\") on node \"10.0.0.113\" DevicePath \"\"" Oct 2 19:14:36.664283 kubelet[1440]: I1002 19:14:36.663975 1440 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-host-proc-sys-kernel\") on node \"10.0.0.113\" DevicePath \"\"" Oct 2 19:14:36.664283 kubelet[1440]: I1002 19:14:36.663984 1440 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-hostproc\") on node \"10.0.0.113\" DevicePath \"\"" Oct 2 19:14:36.664283 kubelet[1440]: I1002 19:14:36.664095 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70627fb3-d893-4eee-af63-f9728d7a7ff1-cilium-config-path" (OuterVolumeSpecName: 
"cilium-config-path") pod "70627fb3-d893-4eee-af63-f9728d7a7ff1" (UID: "70627fb3-d893-4eee-af63-f9728d7a7ff1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:14:36.664283 kubelet[1440]: I1002 19:14:36.663828 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "70627fb3-d893-4eee-af63-f9728d7a7ff1" (UID: "70627fb3-d893-4eee-af63-f9728d7a7ff1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:14:36.665053 kubelet[1440]: I1002 19:14:36.665014 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70627fb3-d893-4eee-af63-f9728d7a7ff1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "70627fb3-d893-4eee-af63-f9728d7a7ff1" (UID: "70627fb3-d893-4eee-af63-f9728d7a7ff1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:14:36.665890 kubelet[1440]: I1002 19:14:36.665859 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70627fb3-d893-4eee-af63-f9728d7a7ff1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "70627fb3-d893-4eee-af63-f9728d7a7ff1" (UID: "70627fb3-d893-4eee-af63-f9728d7a7ff1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:14:36.666678 kubelet[1440]: I1002 19:14:36.666653 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70627fb3-d893-4eee-af63-f9728d7a7ff1-kube-api-access-ddz78" (OuterVolumeSpecName: "kube-api-access-ddz78") pod "70627fb3-d893-4eee-af63-f9728d7a7ff1" (UID: "70627fb3-d893-4eee-af63-f9728d7a7ff1"). InnerVolumeSpecName "kube-api-access-ddz78". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:14:36.688729 systemd[1]: run-containerd-runc-k8s.io-8a5641d8ca198fbfa5b5e8b43ada7f3a6e7d184297f0f831678bc77e728b62b9-runc.NTxqy3.mount: Deactivated successfully. Oct 2 19:14:36.688825 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a5641d8ca198fbfa5b5e8b43ada7f3a6e7d184297f0f831678bc77e728b62b9-rootfs.mount: Deactivated successfully. Oct 2 19:14:36.688879 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-814f3db8534747302957a5e24b344f9dbb77884e358981888947ca83253501d7-rootfs.mount: Deactivated successfully. Oct 2 19:14:36.688927 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-814f3db8534747302957a5e24b344f9dbb77884e358981888947ca83253501d7-shm.mount: Deactivated successfully. Oct 2 19:14:36.688980 systemd[1]: var-lib-kubelet-pods-70627fb3\x2dd893\x2d4eee\x2daf63\x2df9728d7a7ff1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dddz78.mount: Deactivated successfully. Oct 2 19:14:36.689026 systemd[1]: var-lib-kubelet-pods-70627fb3\x2dd893\x2d4eee\x2daf63\x2df9728d7a7ff1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:14:36.689089 systemd[1]: var-lib-kubelet-pods-70627fb3\x2dd893\x2d4eee\x2daf63\x2df9728d7a7ff1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Oct 2 19:14:36.765139 kubelet[1440]: I1002 19:14:36.765097 1440 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70627fb3-d893-4eee-af63-f9728d7a7ff1-clustermesh-secrets\") on node \"10.0.0.113\" DevicePath \"\"" Oct 2 19:14:36.765139 kubelet[1440]: I1002 19:14:36.765136 1440 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70627fb3-d893-4eee-af63-f9728d7a7ff1-cilium-config-path\") on node \"10.0.0.113\" DevicePath \"\"" Oct 2 19:14:36.765139 kubelet[1440]: I1002 19:14:36.765147 1440 reconciler.go:399] "Volume detached for volume \"kube-api-access-ddz78\" (UniqueName: \"kubernetes.io/projected/70627fb3-d893-4eee-af63-f9728d7a7ff1-kube-api-access-ddz78\") on node \"10.0.0.113\" DevicePath \"\"" Oct 2 19:14:36.765295 kubelet[1440]: I1002 19:14:36.765157 1440 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70627fb3-d893-4eee-af63-f9728d7a7ff1-cilium-run\") on node \"10.0.0.113\" DevicePath \"\"" Oct 2 19:14:36.765295 kubelet[1440]: I1002 19:14:36.765233 1440 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70627fb3-d893-4eee-af63-f9728d7a7ff1-hubble-tls\") on node \"10.0.0.113\" DevicePath \"\"" Oct 2 19:14:37.199129 kubelet[1440]: E1002 19:14:37.199082 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:37.261156 kubelet[1440]: I1002 19:14:37.261116 1440 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=079c025c-7182-4c3b-9417-081eb20ee218 path="/var/lib/kubelet/pods/079c025c-7182-4c3b-9417-081eb20ee218/volumes" Oct 2 19:14:37.264648 systemd[1]: Removed slice kubepods-burstable-pod70627fb3_d893_4eee_af63_f9728d7a7ff1.slice. 
Oct 2 19:14:37.582992 kubelet[1440]: I1002 19:14:37.582966 1440 scope.go:115] "RemoveContainer" containerID="8a5641d8ca198fbfa5b5e8b43ada7f3a6e7d184297f0f831678bc77e728b62b9" Oct 2 19:14:37.585911 env[1138]: time="2023-10-02T19:14:37.585866463Z" level=info msg="RemoveContainer for \"8a5641d8ca198fbfa5b5e8b43ada7f3a6e7d184297f0f831678bc77e728b62b9\"" Oct 2 19:14:37.589108 env[1138]: time="2023-10-02T19:14:37.589074892Z" level=info msg="RemoveContainer for \"8a5641d8ca198fbfa5b5e8b43ada7f3a6e7d184297f0f831678bc77e728b62b9\" returns successfully" Oct 2 19:14:38.199918 kubelet[1440]: E1002 19:14:38.199883 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:38.856787 kubelet[1440]: W1002 19:14:38.856730 1440 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70627fb3_d893_4eee_af63_f9728d7a7ff1.slice/cri-containerd-8a5641d8ca198fbfa5b5e8b43ada7f3a6e7d184297f0f831678bc77e728b62b9.scope WatchSource:0}: container "8a5641d8ca198fbfa5b5e8b43ada7f3a6e7d184297f0f831678bc77e728b62b9" in namespace "k8s.io": not found Oct 2 19:14:39.200565 kubelet[1440]: E1002 19:14:39.200458 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:39.261403 kubelet[1440]: I1002 19:14:39.261370 1440 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=70627fb3-d893-4eee-af63-f9728d7a7ff1 path="/var/lib/kubelet/pods/70627fb3-d893-4eee-af63-f9728d7a7ff1/volumes" Oct 2 19:14:39.440800 kubelet[1440]: I1002 19:14:39.440739 1440 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:14:39.440800 kubelet[1440]: E1002 19:14:39.440805 1440 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="70627fb3-d893-4eee-af63-f9728d7a7ff1" containerName="mount-cgroup" Oct 2 19:14:39.440979 kubelet[1440]: I1002 19:14:39.440825 1440 memory_manager.go:345] 
"RemoveStaleState removing state" podUID="70627fb3-d893-4eee-af63-f9728d7a7ff1" containerName="mount-cgroup" Oct 2 19:14:39.440979 kubelet[1440]: I1002 19:14:39.440972 1440 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:14:39.446060 systemd[1]: Created slice kubepods-burstable-pod03bb2c10_64ac_49ca_aee0_21ba65fb0462.slice. Oct 2 19:14:39.454505 systemd[1]: Created slice kubepods-besteffort-pod21476d9a_edcc_4348_aa86_49dda98d4417.slice. Oct 2 19:14:39.477986 kubelet[1440]: I1002 19:14:39.477930 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-cilium-cgroup\") pod \"cilium-sdfhm\" (UID: \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\") " pod="kube-system/cilium-sdfhm" Oct 2 19:14:39.478152 kubelet[1440]: I1002 19:14:39.478012 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-cni-path\") pod \"cilium-sdfhm\" (UID: \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\") " pod="kube-system/cilium-sdfhm" Oct 2 19:14:39.478152 kubelet[1440]: I1002 19:14:39.478071 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-lib-modules\") pod \"cilium-sdfhm\" (UID: \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\") " pod="kube-system/cilium-sdfhm" Oct 2 19:14:39.478152 kubelet[1440]: I1002 19:14:39.478123 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/03bb2c10-64ac-49ca-aee0-21ba65fb0462-clustermesh-secrets\") pod \"cilium-sdfhm\" (UID: \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\") " pod="kube-system/cilium-sdfhm" Oct 2 19:14:39.478152 kubelet[1440]: I1002 19:14:39.478152 1440 reconciler.go:357] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/03bb2c10-64ac-49ca-aee0-21ba65fb0462-cilium-ipsec-secrets\") pod \"cilium-sdfhm\" (UID: \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\") " pod="kube-system/cilium-sdfhm" Oct 2 19:14:39.478270 kubelet[1440]: I1002 19:14:39.478206 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-cilium-run\") pod \"cilium-sdfhm\" (UID: \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\") " pod="kube-system/cilium-sdfhm" Oct 2 19:14:39.478270 kubelet[1440]: I1002 19:14:39.478232 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-hostproc\") pod \"cilium-sdfhm\" (UID: \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\") " pod="kube-system/cilium-sdfhm" Oct 2 19:14:39.478320 kubelet[1440]: I1002 19:14:39.478296 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-host-proc-sys-kernel\") pod \"cilium-sdfhm\" (UID: \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\") " pod="kube-system/cilium-sdfhm" Oct 2 19:14:39.478320 kubelet[1440]: I1002 19:14:39.478316 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/03bb2c10-64ac-49ca-aee0-21ba65fb0462-hubble-tls\") pod \"cilium-sdfhm\" (UID: \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\") " pod="kube-system/cilium-sdfhm" Oct 2 19:14:39.478373 kubelet[1440]: I1002 19:14:39.478356 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9rkr\" (UniqueName: 
\"kubernetes.io/projected/21476d9a-edcc-4348-aa86-49dda98d4417-kube-api-access-m9rkr\") pod \"cilium-operator-69b677f97c-tqtxr\" (UID: \"21476d9a-edcc-4348-aa86-49dda98d4417\") " pod="kube-system/cilium-operator-69b677f97c-tqtxr" Oct 2 19:14:39.478405 kubelet[1440]: I1002 19:14:39.478379 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-bpf-maps\") pod \"cilium-sdfhm\" (UID: \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\") " pod="kube-system/cilium-sdfhm" Oct 2 19:14:39.478443 kubelet[1440]: I1002 19:14:39.478413 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-xtables-lock\") pod \"cilium-sdfhm\" (UID: \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\") " pod="kube-system/cilium-sdfhm" Oct 2 19:14:39.478501 kubelet[1440]: I1002 19:14:39.478472 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/03bb2c10-64ac-49ca-aee0-21ba65fb0462-cilium-config-path\") pod \"cilium-sdfhm\" (UID: \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\") " pod="kube-system/cilium-sdfhm" Oct 2 19:14:39.478566 kubelet[1440]: I1002 19:14:39.478523 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/21476d9a-edcc-4348-aa86-49dda98d4417-cilium-config-path\") pod \"cilium-operator-69b677f97c-tqtxr\" (UID: \"21476d9a-edcc-4348-aa86-49dda98d4417\") " pod="kube-system/cilium-operator-69b677f97c-tqtxr" Oct 2 19:14:39.478599 kubelet[1440]: I1002 19:14:39.478581 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-etc-cni-netd\") 
pod \"cilium-sdfhm\" (UID: \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\") " pod="kube-system/cilium-sdfhm" Oct 2 19:14:39.478623 kubelet[1440]: I1002 19:14:39.478613 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-host-proc-sys-net\") pod \"cilium-sdfhm\" (UID: \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\") " pod="kube-system/cilium-sdfhm" Oct 2 19:14:39.478690 kubelet[1440]: I1002 19:14:39.478664 1440 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv2br\" (UniqueName: \"kubernetes.io/projected/03bb2c10-64ac-49ca-aee0-21ba65fb0462-kube-api-access-pv2br\") pod \"cilium-sdfhm\" (UID: \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\") " pod="kube-system/cilium-sdfhm" Oct 2 19:14:39.753587 kubelet[1440]: E1002 19:14:39.753447 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:14:39.754514 env[1138]: time="2023-10-02T19:14:39.754471256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sdfhm,Uid:03bb2c10-64ac-49ca-aee0-21ba65fb0462,Namespace:kube-system,Attempt:0,}" Oct 2 19:14:39.756576 kubelet[1440]: E1002 19:14:39.756546 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:14:39.757131 env[1138]: time="2023-10-02T19:14:39.757089599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-tqtxr,Uid:21476d9a-edcc-4348-aa86-49dda98d4417,Namespace:kube-system,Attempt:0,}" Oct 2 19:14:39.771097 env[1138]: time="2023-10-02T19:14:39.770997118Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:14:39.771097 env[1138]: time="2023-10-02T19:14:39.771059918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:14:39.771097 env[1138]: time="2023-10-02T19:14:39.771071558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:14:39.771548 env[1138]: time="2023-10-02T19:14:39.771504602Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6b7e742cdd89cf867ce5816dd63f6aea3de8500113a5c45bbd549e3524a1629c pid=2195 runtime=io.containerd.runc.v2 Oct 2 19:14:39.771846 env[1138]: time="2023-10-02T19:14:39.771790445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:14:39.771846 env[1138]: time="2023-10-02T19:14:39.771827605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:14:39.772092 env[1138]: time="2023-10-02T19:14:39.772043527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:14:39.772358 env[1138]: time="2023-10-02T19:14:39.772294809Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/25f6e42f6b9eb99026f5d720b5f26a11200c7d648c954d37bbfb6fdf6bb3bdb5 pid=2192 runtime=io.containerd.runc.v2 Oct 2 19:14:39.785227 systemd[1]: Started cri-containerd-25f6e42f6b9eb99026f5d720b5f26a11200c7d648c954d37bbfb6fdf6bb3bdb5.scope. Oct 2 19:14:39.791906 systemd[1]: Started cri-containerd-6b7e742cdd89cf867ce5816dd63f6aea3de8500113a5c45bbd549e3524a1629c.scope. 
Oct 2 19:14:39.810000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.813927 kernel: kauditd_printk_skb: 51 callbacks suppressed Oct 2 19:14:39.813995 kernel: audit: type=1400 audit(1696274079.810:685): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.814022 kernel: audit: type=1400 audit(1696274079.810:686): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.810000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.815711 kernel: audit: type=1400 audit(1696274079.810:687): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.810000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.817470 kernel: audit: type=1400 audit(1696274079.810:688): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.810000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.819236 kernel: audit: type=1400 audit(1696274079.810:689): avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.810000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.821019 kernel: audit: type=1400 audit(1696274079.810:690): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.810000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.810000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.824481 kernel: audit: type=1400 audit(1696274079.810:691): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.824549 kernel: audit: type=1400 audit(1696274079.810:692): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.810000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.810000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.828019 kernel: audit: type=1400 audit(1696274079.810:693): avc: denied { bpf } 
for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.829768 kernel: audit: type=1400 audit(1696274079.810:694): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.810000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.810000 audit: BPF prog-id=82 op=LOAD Oct 2 19:14:39.814000 audit[2215]: AVC avc: denied { bpf } for pid=2215 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.814000 audit[2215]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=40001bdb38 a2=10 a3=0 items=0 ppid=2192 pid=2215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:39.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235663665343266366239656239393032366635643732306235663236 Oct 2 19:14:39.814000 audit[2215]: AVC avc: denied { perfmon } for pid=2215 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.814000 audit[2215]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001bd5a0 a2=3c a3=0 items=0 ppid=2192 pid=2215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:39.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235663665343266366239656239393032366635643732306235663236 Oct 2 19:14:39.814000 audit[2215]: AVC avc: denied { bpf } for pid=2215 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.814000 audit[2215]: AVC avc: denied { bpf } for pid=2215 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.814000 audit[2215]: AVC avc: denied { bpf } for pid=2215 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.814000 audit[2215]: AVC avc: denied { perfmon } for pid=2215 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.814000 audit[2215]: AVC avc: denied { perfmon } for pid=2215 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.814000 audit[2215]: AVC avc: denied { perfmon } for pid=2215 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.814000 audit[2215]: AVC avc: denied { perfmon } for pid=2215 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.814000 audit[2215]: AVC avc: denied { perfmon } for pid=2215 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.814000 audit[2215]: AVC avc: denied { bpf } for pid=2215 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.814000 audit[2215]: AVC avc: denied { bpf } for pid=2215 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.814000 audit: BPF prog-id=83 op=LOAD Oct 2 19:14:39.814000 audit[2215]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001bd8e0 a2=78 a3=0 items=0 ppid=2192 pid=2215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:39.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235663665343266366239656239393032366635643732306235663236 Oct 2 19:14:39.816000 audit[2215]: AVC avc: denied { bpf } for pid=2215 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.816000 audit[2215]: AVC avc: denied { bpf } for pid=2215 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.816000 audit[2215]: AVC avc: denied { perfmon } for pid=2215 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.816000 audit[2215]: AVC avc: denied { perfmon } for pid=2215 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.816000 audit[2215]: AVC avc: denied { perfmon } for pid=2215 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.816000 audit[2215]: AVC avc: denied { perfmon } for pid=2215 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.816000 audit[2215]: AVC avc: denied { perfmon } for pid=2215 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.816000 audit[2215]: AVC avc: denied { bpf } for pid=2215 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.816000 audit[2215]: AVC avc: denied { bpf } for pid=2215 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.816000 audit: BPF prog-id=84 op=LOAD Oct 2 19:14:39.816000 audit[2215]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001bd670 a2=78 a3=0 items=0 ppid=2192 pid=2215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:39.816000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235663665343266366239656239393032366635643732306235663236 Oct 2 19:14:39.818000 audit: BPF prog-id=84 op=UNLOAD Oct 2 19:14:39.818000 audit: BPF prog-id=83 op=UNLOAD Oct 2 19:14:39.818000 audit[2215]: AVC avc: denied { bpf } for 
pid=2215 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.818000 audit[2215]: AVC avc: denied { bpf } for pid=2215 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.818000 audit[2215]: AVC avc: denied { bpf } for pid=2215 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.818000 audit[2215]: AVC avc: denied { perfmon } for pid=2215 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.818000 audit[2215]: AVC avc: denied { perfmon } for pid=2215 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.818000 audit[2215]: AVC avc: denied { perfmon } for pid=2215 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.818000 audit[2215]: AVC avc: denied { perfmon } for pid=2215 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.818000 audit[2215]: AVC avc: denied { perfmon } for pid=2215 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.818000 audit[2215]: AVC avc: denied { bpf } for pid=2215 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.818000 audit[2215]: AVC avc: denied { bpf } for pid=2215 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.818000 audit: BPF prog-id=85 op=LOAD Oct 2 19:14:39.818000 audit[2215]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001bdb40 a2=78 a3=0 items=0 ppid=2192 pid=2215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:39.818000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235663665343266366239656239393032366635643732306235663236 Oct 2 19:14:39.829000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.829000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.829000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.829000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.829000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.829000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:14:39.829000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.829000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.829000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.829000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.829000 audit: BPF prog-id=86 op=LOAD Oct 2 19:14:39.830000 audit[2214]: AVC avc: denied { bpf } for pid=2214 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.830000 audit[2214]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=400014db38 a2=10 a3=0 items=0 ppid=2195 pid=2214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:39.830000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662376537343263646438396366383637636535383136646436336636 Oct 2 19:14:39.830000 audit[2214]: AVC avc: denied { perfmon } for pid=2214 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.830000 audit[2214]: 
SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=400014d5a0 a2=3c a3=0 items=0 ppid=2195 pid=2214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:39.830000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662376537343263646438396366383637636535383136646436336636 Oct 2 19:14:39.830000 audit[2214]: AVC avc: denied { bpf } for pid=2214 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.830000 audit[2214]: AVC avc: denied { bpf } for pid=2214 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.830000 audit[2214]: AVC avc: denied { bpf } for pid=2214 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.830000 audit[2214]: AVC avc: denied { perfmon } for pid=2214 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.830000 audit[2214]: AVC avc: denied { perfmon } for pid=2214 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.830000 audit[2214]: AVC avc: denied { perfmon } for pid=2214 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.830000 audit[2214]: AVC avc: denied { perfmon } for pid=2214 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.830000 audit[2214]: AVC avc: denied { perfmon } for pid=2214 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.830000 audit[2214]: AVC avc: denied { bpf } for pid=2214 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.830000 audit[2214]: AVC avc: denied { bpf } for pid=2214 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.830000 audit: BPF prog-id=87 op=LOAD Oct 2 19:14:39.830000 audit[2214]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400014d8e0 a2=78 a3=0 items=0 ppid=2195 pid=2214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:39.830000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662376537343263646438396366383637636535383136646436336636 Oct 2 19:14:39.830000 audit[2214]: AVC avc: denied { bpf } for pid=2214 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.830000 audit[2214]: AVC avc: denied { bpf } for pid=2214 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.830000 audit[2214]: AVC avc: denied { perfmon } for pid=2214 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.830000 audit[2214]: AVC avc: denied { perfmon } for pid=2214 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.830000 audit[2214]: AVC avc: denied { perfmon } for pid=2214 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.830000 audit[2214]: AVC avc: denied { perfmon } for pid=2214 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.830000 audit[2214]: AVC avc: denied { perfmon } for pid=2214 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.830000 audit[2214]: AVC avc: denied { bpf } for pid=2214 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.830000 audit[2214]: AVC avc: denied { bpf } for pid=2214 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.830000 audit: BPF prog-id=88 op=LOAD Oct 2 19:14:39.830000 audit[2214]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400014d670 a2=78 a3=0 items=0 ppid=2195 pid=2214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:39.830000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662376537343263646438396366383637636535383136646436336636 Oct 2 19:14:39.830000 audit: BPF prog-id=88 op=UNLOAD Oct 2 19:14:39.830000 audit: BPF prog-id=87 op=UNLOAD Oct 2 19:14:39.830000 audit[2214]: AVC avc: denied { bpf } for pid=2214 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.830000 audit[2214]: AVC avc: denied { bpf } for pid=2214 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.830000 audit[2214]: AVC avc: denied { bpf } for pid=2214 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.830000 audit[2214]: AVC avc: denied { perfmon } for pid=2214 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.830000 audit[2214]: AVC avc: denied { perfmon } for pid=2214 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.830000 audit[2214]: AVC avc: denied { perfmon } for pid=2214 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.830000 audit[2214]: AVC avc: denied { perfmon } for pid=2214 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.830000 audit[2214]: AVC avc: denied { perfmon } for pid=2214 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.830000 audit[2214]: AVC avc: denied { bpf } for pid=2214 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.830000 audit[2214]: AVC avc: denied { bpf } for pid=2214 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:39.830000 audit: BPF prog-id=89 op=LOAD Oct 2 19:14:39.830000 audit[2214]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400014db40 a2=78 a3=0 items=0 ppid=2195 pid=2214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:39.830000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662376537343263646438396366383637636535383136646436336636 Oct 2 19:14:39.851231 env[1138]: time="2023-10-02T19:14:39.851184206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sdfhm,Uid:03bb2c10-64ac-49ca-aee0-21ba65fb0462,Namespace:kube-system,Attempt:0,} returns sandbox id \"25f6e42f6b9eb99026f5d720b5f26a11200c7d648c954d37bbfb6fdf6bb3bdb5\"" Oct 2 19:14:39.852088 kubelet[1440]: E1002 19:14:39.852065 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:14:39.854288 env[1138]: time="2023-10-02T19:14:39.854251392Z" level=info msg="CreateContainer within sandbox \"25f6e42f6b9eb99026f5d720b5f26a11200c7d648c954d37bbfb6fdf6bb3bdb5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 
19:14:39.854698 env[1138]: time="2023-10-02T19:14:39.854669275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-tqtxr,Uid:21476d9a-edcc-4348-aa86-49dda98d4417,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b7e742cdd89cf867ce5816dd63f6aea3de8500113a5c45bbd549e3524a1629c\"" Oct 2 19:14:39.855190 kubelet[1440]: E1002 19:14:39.855173 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:14:39.855849 env[1138]: time="2023-10-02T19:14:39.855821525Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\"" Oct 2 19:14:39.865788 env[1138]: time="2023-10-02T19:14:39.865735210Z" level=info msg="CreateContainer within sandbox \"25f6e42f6b9eb99026f5d720b5f26a11200c7d648c954d37bbfb6fdf6bb3bdb5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f38cd9374e83fc539de5e60631792eaa987d66e1d6687d3c6197ccb55b889539\"" Oct 2 19:14:39.866308 env[1138]: time="2023-10-02T19:14:39.866279735Z" level=info msg="StartContainer for \"f38cd9374e83fc539de5e60631792eaa987d66e1d6687d3c6197ccb55b889539\"" Oct 2 19:14:39.882131 systemd[1]: Started cri-containerd-f38cd9374e83fc539de5e60631792eaa987d66e1d6687d3c6197ccb55b889539.scope. Oct 2 19:14:39.901162 systemd[1]: cri-containerd-f38cd9374e83fc539de5e60631792eaa987d66e1d6687d3c6197ccb55b889539.scope: Deactivated successfully. 
Oct 2 19:14:39.921222 env[1138]: time="2023-10-02T19:14:39.921168966Z" level=info msg="shim disconnected" id=f38cd9374e83fc539de5e60631792eaa987d66e1d6687d3c6197ccb55b889539 Oct 2 19:14:39.921222 env[1138]: time="2023-10-02T19:14:39.921217286Z" level=warning msg="cleaning up after shim disconnected" id=f38cd9374e83fc539de5e60631792eaa987d66e1d6687d3c6197ccb55b889539 namespace=k8s.io Oct 2 19:14:39.921222 env[1138]: time="2023-10-02T19:14:39.921226686Z" level=info msg="cleaning up dead shim" Oct 2 19:14:39.929914 env[1138]: time="2023-10-02T19:14:39.929861400Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:14:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2287 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:14:39Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/f38cd9374e83fc539de5e60631792eaa987d66e1d6687d3c6197ccb55b889539/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:14:39.930193 env[1138]: time="2023-10-02T19:14:39.930124683Z" level=error msg="copy shim log" error="read /proc/self/fd/36: file already closed" Oct 2 19:14:39.930351 env[1138]: time="2023-10-02T19:14:39.930305804Z" level=error msg="Failed to pipe stdout of container \"f38cd9374e83fc539de5e60631792eaa987d66e1d6687d3c6197ccb55b889539\"" error="reading from a closed fifo" Oct 2 19:14:39.930473 env[1138]: time="2023-10-02T19:14:39.930439805Z" level=error msg="Failed to pipe stderr of container \"f38cd9374e83fc539de5e60631792eaa987d66e1d6687d3c6197ccb55b889539\"" error="reading from a closed fifo" Oct 2 19:14:39.931579 env[1138]: time="2023-10-02T19:14:39.931529175Z" level=error msg="StartContainer for \"f38cd9374e83fc539de5e60631792eaa987d66e1d6687d3c6197ccb55b889539\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:14:39.931802 kubelet[1440]: E1002 19:14:39.931775 1440 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="f38cd9374e83fc539de5e60631792eaa987d66e1d6687d3c6197ccb55b889539" Oct 2 19:14:39.931928 kubelet[1440]: E1002 19:14:39.931893 1440 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:14:39.931928 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:14:39.931928 kubelet[1440]: rm /hostbin/cilium-mount Oct 2 19:14:39.931928 kubelet[1440]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-pv2br,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-sdfhm_kube-system(03bb2c10-64ac-49ca-aee0-21ba65fb0462): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:14:39.932100 kubelet[1440]: E1002 19:14:39.931936 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-sdfhm" podUID=03bb2c10-64ac-49ca-aee0-21ba65fb0462 Oct 2 19:14:40.171370 kubelet[1440]: E1002 19:14:40.171325 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:14:40.200799 kubelet[1440]: E1002 19:14:40.200764 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:40.589951 kubelet[1440]: E1002 19:14:40.589786 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:14:40.591661 env[1138]: time="2023-10-02T19:14:40.591603476Z" level=info 
msg="CreateContainer within sandbox \"25f6e42f6b9eb99026f5d720b5f26a11200c7d648c954d37bbfb6fdf6bb3bdb5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:14:40.602447 env[1138]: time="2023-10-02T19:14:40.602392567Z" level=info msg="CreateContainer within sandbox \"25f6e42f6b9eb99026f5d720b5f26a11200c7d648c954d37bbfb6fdf6bb3bdb5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"1ee84cf45e9a88a41448a629cefc7baf6aa5bef318d15d772a166717e7edca6a\"" Oct 2 19:14:40.603517 env[1138]: time="2023-10-02T19:14:40.603472736Z" level=info msg="StartContainer for \"1ee84cf45e9a88a41448a629cefc7baf6aa5bef318d15d772a166717e7edca6a\"" Oct 2 19:14:40.627997 systemd[1]: Started cri-containerd-1ee84cf45e9a88a41448a629cefc7baf6aa5bef318d15d772a166717e7edca6a.scope. Oct 2 19:14:40.646913 systemd[1]: cri-containerd-1ee84cf45e9a88a41448a629cefc7baf6aa5bef318d15d772a166717e7edca6a.scope: Deactivated successfully. Oct 2 19:14:40.684735 env[1138]: time="2023-10-02T19:14:40.684671861Z" level=info msg="shim disconnected" id=1ee84cf45e9a88a41448a629cefc7baf6aa5bef318d15d772a166717e7edca6a Oct 2 19:14:40.684735 env[1138]: time="2023-10-02T19:14:40.684729301Z" level=warning msg="cleaning up after shim disconnected" id=1ee84cf45e9a88a41448a629cefc7baf6aa5bef318d15d772a166717e7edca6a namespace=k8s.io Oct 2 19:14:40.684735 env[1138]: time="2023-10-02T19:14:40.684740781Z" level=info msg="cleaning up dead shim" Oct 2 19:14:40.693179 env[1138]: time="2023-10-02T19:14:40.693123772Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:14:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2327 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:14:40Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1ee84cf45e9a88a41448a629cefc7baf6aa5bef318d15d772a166717e7edca6a/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:14:40.693413 env[1138]: 
time="2023-10-02T19:14:40.693355534Z" level=error msg="copy shim log" error="read /proc/self/fd/47: file already closed" Oct 2 19:14:40.693569 env[1138]: time="2023-10-02T19:14:40.693524695Z" level=error msg="Failed to pipe stdout of container \"1ee84cf45e9a88a41448a629cefc7baf6aa5bef318d15d772a166717e7edca6a\"" error="reading from a closed fifo" Oct 2 19:14:40.693685 env[1138]: time="2023-10-02T19:14:40.693658657Z" level=error msg="Failed to pipe stderr of container \"1ee84cf45e9a88a41448a629cefc7baf6aa5bef318d15d772a166717e7edca6a\"" error="reading from a closed fifo" Oct 2 19:14:40.696003 env[1138]: time="2023-10-02T19:14:40.695950876Z" level=error msg="StartContainer for \"1ee84cf45e9a88a41448a629cefc7baf6aa5bef318d15d772a166717e7edca6a\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:14:40.696274 kubelet[1440]: E1002 19:14:40.696249 1440 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="1ee84cf45e9a88a41448a629cefc7baf6aa5bef318d15d772a166717e7edca6a" Oct 2 19:14:40.696368 kubelet[1440]: E1002 19:14:40.696351 1440 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:14:40.696368 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:14:40.696368 kubelet[1440]: rm /hostbin/cilium-mount 
Oct 2 19:14:40.696368 kubelet[1440]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-pv2br,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-sdfhm_kube-system(03bb2c10-64ac-49ca-aee0-21ba65fb0462): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:14:40.696492 kubelet[1440]: E1002 19:14:40.696386 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during 
container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-sdfhm" podUID=03bb2c10-64ac-49ca-aee0-21ba65fb0462 Oct 2 19:14:41.172159 env[1138]: time="2023-10-02T19:14:41.172106029Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:14:41.172868 env[1138]: time="2023-10-02T19:14:41.172830715Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\" returns image reference \"sha256:e0bfc5d64e2c86e8497f9da5fbf169dc17a08c923bc75187d41ff880cb71c12f\"" Oct 2 19:14:41.173683 env[1138]: time="2023-10-02T19:14:41.173655242Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e0bfc5d64e2c86e8497f9da5fbf169dc17a08c923bc75187d41ff880cb71c12f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:14:41.174611 env[1138]: time="2023-10-02T19:14:41.174581650Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:14:41.176686 env[1138]: time="2023-10-02T19:14:41.176658987Z" level=info msg="CreateContainer within sandbox \"6b7e742cdd89cf867ce5816dd63f6aea3de8500113a5c45bbd549e3524a1629c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 19:14:41.190028 env[1138]: time="2023-10-02T19:14:41.189986657Z" level=info msg="CreateContainer within sandbox \"6b7e742cdd89cf867ce5816dd63f6aea3de8500113a5c45bbd549e3524a1629c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"05332a06e10757f6308c3a2dc944f006f9562c148c3a1491ad31b7c44d2f1ff2\"" Oct 2 19:14:41.190767 env[1138]: 
time="2023-10-02T19:14:41.190737544Z" level=info msg="StartContainer for \"05332a06e10757f6308c3a2dc944f006f9562c148c3a1491ad31b7c44d2f1ff2\"" Oct 2 19:14:41.201657 kubelet[1440]: E1002 19:14:41.201617 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:41.209244 systemd[1]: Started cri-containerd-05332a06e10757f6308c3a2dc944f006f9562c148c3a1491ad31b7c44d2f1ff2.scope. Oct 2 19:14:41.231000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.231000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.231000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.231000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.231000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.231000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.231000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.231000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.231000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.231000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.231000 audit: BPF prog-id=90 op=LOAD Oct 2 19:14:41.232000 audit[2348]: AVC avc: denied { bpf } for pid=2348 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.232000 audit[2348]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000147b38 a2=10 a3=0 items=0 ppid=2195 pid=2348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:41.232000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035333332613036653130373537663633303863336132646339343466 Oct 2 19:14:41.232000 audit[2348]: AVC avc: denied { perfmon } for pid=2348 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.232000 audit[2348]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001475a0 a2=3c a3=0 items=0 ppid=2195 pid=2348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 
2 19:14:41.232000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035333332613036653130373537663633303863336132646339343466 Oct 2 19:14:41.232000 audit[2348]: AVC avc: denied { bpf } for pid=2348 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.232000 audit[2348]: AVC avc: denied { bpf } for pid=2348 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.232000 audit[2348]: AVC avc: denied { bpf } for pid=2348 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.232000 audit[2348]: AVC avc: denied { perfmon } for pid=2348 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.232000 audit[2348]: AVC avc: denied { perfmon } for pid=2348 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.232000 audit[2348]: AVC avc: denied { perfmon } for pid=2348 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.232000 audit[2348]: AVC avc: denied { perfmon } for pid=2348 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.232000 audit[2348]: AVC avc: denied { perfmon } for pid=2348 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:14:41.232000 audit[2348]: AVC avc: denied { bpf } for pid=2348 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.232000 audit[2348]: AVC avc: denied { bpf } for pid=2348 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.232000 audit: BPF prog-id=91 op=LOAD Oct 2 19:14:41.232000 audit[2348]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001478e0 a2=78 a3=0 items=0 ppid=2195 pid=2348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:41.232000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035333332613036653130373537663633303863336132646339343466 Oct 2 19:14:41.232000 audit[2348]: AVC avc: denied { bpf } for pid=2348 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.232000 audit[2348]: AVC avc: denied { bpf } for pid=2348 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.232000 audit[2348]: AVC avc: denied { perfmon } for pid=2348 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.232000 audit[2348]: AVC avc: denied { perfmon } for pid=2348 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.232000 audit[2348]: AVC avc: 
denied { perfmon } for pid=2348 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.232000 audit[2348]: AVC avc: denied { perfmon } for pid=2348 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.232000 audit[2348]: AVC avc: denied { perfmon } for pid=2348 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.232000 audit[2348]: AVC avc: denied { bpf } for pid=2348 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.232000 audit[2348]: AVC avc: denied { bpf } for pid=2348 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.232000 audit: BPF prog-id=92 op=LOAD Oct 2 19:14:41.232000 audit[2348]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000147670 a2=78 a3=0 items=0 ppid=2195 pid=2348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:41.232000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035333332613036653130373537663633303863336132646339343466 Oct 2 19:14:41.232000 audit: BPF prog-id=92 op=UNLOAD Oct 2 19:14:41.232000 audit: BPF prog-id=91 op=UNLOAD Oct 2 19:14:41.232000 audit[2348]: AVC avc: denied { bpf } for pid=2348 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:14:41.232000 audit[2348]: AVC avc: denied { bpf } for pid=2348 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.232000 audit[2348]: AVC avc: denied { bpf } for pid=2348 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.232000 audit[2348]: AVC avc: denied { perfmon } for pid=2348 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.232000 audit[2348]: AVC avc: denied { perfmon } for pid=2348 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.232000 audit[2348]: AVC avc: denied { perfmon } for pid=2348 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.232000 audit[2348]: AVC avc: denied { perfmon } for pid=2348 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.232000 audit[2348]: AVC avc: denied { perfmon } for pid=2348 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.232000 audit[2348]: AVC avc: denied { bpf } for pid=2348 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.232000 audit[2348]: AVC avc: denied { bpf } for pid=2348 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:14:41.232000 audit: BPF prog-id=93 op=LOAD Oct 2 19:14:41.232000 
audit[2348]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000147b40 a2=78 a3=0 items=0 ppid=2195 pid=2348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:14:41.232000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035333332613036653130373537663633303863336132646339343466 Oct 2 19:14:41.250471 env[1138]: time="2023-10-02T19:14:41.250421599Z" level=info msg="StartContainer for \"05332a06e10757f6308c3a2dc944f006f9562c148c3a1491ad31b7c44d2f1ff2\" returns successfully" Oct 2 19:14:41.315000 audit[2359]: AVC avc: denied { map_create } for pid=2359 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c547,c1019 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c547,c1019 tclass=bpf permissive=0 Oct 2 19:14:41.315000 audit[2359]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-13 a0=0 a1=40005bb768 a2=48 a3=0 items=0 ppid=2195 pid=2359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c547,c1019 key=(null) Oct 2 19:14:41.315000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 19:14:41.584524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ee84cf45e9a88a41448a629cefc7baf6aa5bef318d15d772a166717e7edca6a-rootfs.mount: Deactivated successfully. 
Oct 2 19:14:41.593520 kubelet[1440]: I1002 19:14:41.593499 1440 scope.go:115] "RemoveContainer" containerID="f38cd9374e83fc539de5e60631792eaa987d66e1d6687d3c6197ccb55b889539" Oct 2 19:14:41.593772 kubelet[1440]: I1002 19:14:41.593748 1440 scope.go:115] "RemoveContainer" containerID="f38cd9374e83fc539de5e60631792eaa987d66e1d6687d3c6197ccb55b889539" Oct 2 19:14:41.594758 env[1138]: time="2023-10-02T19:14:41.594717415Z" level=info msg="RemoveContainer for \"f38cd9374e83fc539de5e60631792eaa987d66e1d6687d3c6197ccb55b889539\"" Oct 2 19:14:41.595720 kubelet[1440]: E1002 19:14:41.595703 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:14:41.596117 env[1138]: time="2023-10-02T19:14:41.596085187Z" level=info msg="RemoveContainer for \"f38cd9374e83fc539de5e60631792eaa987d66e1d6687d3c6197ccb55b889539\"" Oct 2 19:14:41.596214 env[1138]: time="2023-10-02T19:14:41.596173668Z" level=error msg="RemoveContainer for \"f38cd9374e83fc539de5e60631792eaa987d66e1d6687d3c6197ccb55b889539\" failed" error="rpc error: code = NotFound desc = get container info: container \"f38cd9374e83fc539de5e60631792eaa987d66e1d6687d3c6197ccb55b889539\" in namespace \"k8s.io\": not found" Oct 2 19:14:41.596335 kubelet[1440]: E1002 19:14:41.596322 1440 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = NotFound desc = get container info: container \"f38cd9374e83fc539de5e60631792eaa987d66e1d6687d3c6197ccb55b889539\" in namespace \"k8s.io\": not found" containerID="f38cd9374e83fc539de5e60631792eaa987d66e1d6687d3c6197ccb55b889539" Oct 2 19:14:41.596421 kubelet[1440]: E1002 19:14:41.596410 1440 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = NotFound desc = get container info: container "f38cd9374e83fc539de5e60631792eaa987d66e1d6687d3c6197ccb55b889539" in namespace "k8s.io": not found; 
Skipping pod "cilium-sdfhm_kube-system(03bb2c10-64ac-49ca-aee0-21ba65fb0462)" Oct 2 19:14:41.596527 kubelet[1440]: E1002 19:14:41.596517 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:14:41.596792 kubelet[1440]: E1002 19:14:41.596776 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-sdfhm_kube-system(03bb2c10-64ac-49ca-aee0-21ba65fb0462)\"" pod="kube-system/cilium-sdfhm" podUID=03bb2c10-64ac-49ca-aee0-21ba65fb0462 Oct 2 19:14:41.597086 env[1138]: time="2023-10-02T19:14:41.597029235Z" level=info msg="RemoveContainer for \"f38cd9374e83fc539de5e60631792eaa987d66e1d6687d3c6197ccb55b889539\" returns successfully" Oct 2 19:14:42.201938 kubelet[1440]: E1002 19:14:42.201903 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:42.598016 kubelet[1440]: E1002 19:14:42.597981 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:14:42.598192 kubelet[1440]: E1002 19:14:42.598178 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:14:42.598296 kubelet[1440]: E1002 19:14:42.598247 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-sdfhm_kube-system(03bb2c10-64ac-49ca-aee0-21ba65fb0462)\"" pod="kube-system/cilium-sdfhm" podUID=03bb2c10-64ac-49ca-aee0-21ba65fb0462 Oct 2 19:14:43.037679 kubelet[1440]: W1002 
19:14:43.037564 1440 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03bb2c10_64ac_49ca_aee0_21ba65fb0462.slice/cri-containerd-f38cd9374e83fc539de5e60631792eaa987d66e1d6687d3c6197ccb55b889539.scope WatchSource:0}: container "f38cd9374e83fc539de5e60631792eaa987d66e1d6687d3c6197ccb55b889539" in namespace "k8s.io": not found Oct 2 19:14:43.202556 kubelet[1440]: E1002 19:14:43.202527 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:44.203688 kubelet[1440]: E1002 19:14:44.203648 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:45.172177 kubelet[1440]: E1002 19:14:45.172151 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:14:45.204521 kubelet[1440]: E1002 19:14:45.204490 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:46.144949 kubelet[1440]: W1002 19:14:46.144903 1440 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03bb2c10_64ac_49ca_aee0_21ba65fb0462.slice/cri-containerd-1ee84cf45e9a88a41448a629cefc7baf6aa5bef318d15d772a166717e7edca6a.scope WatchSource:0}: task 1ee84cf45e9a88a41448a629cefc7baf6aa5bef318d15d772a166717e7edca6a not found: not found Oct 2 19:14:46.205241 kubelet[1440]: E1002 19:14:46.205185 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:47.205391 kubelet[1440]: E1002 19:14:47.205346 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:14:48.206142 kubelet[1440]: E1002 19:14:48.206060 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:49.207222 kubelet[1440]: E1002 19:14:49.207148 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:50.174536 kubelet[1440]: E1002 19:14:50.174497 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:14:50.207808 kubelet[1440]: E1002 19:14:50.207747 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:51.210147 kubelet[1440]: E1002 19:14:51.208737 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:52.209114 kubelet[1440]: E1002 19:14:52.209049 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:53.209430 kubelet[1440]: E1002 19:14:53.209396 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:54.210350 kubelet[1440]: E1002 19:14:54.210308 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:55.041881 kubelet[1440]: E1002 19:14:55.041838 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:55.175517 kubelet[1440]: E1002 19:14:55.175488 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:14:55.210765 kubelet[1440]: E1002 19:14:55.210721 1440 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:56.211048 kubelet[1440]: E1002 19:14:56.210994 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:56.258770 kubelet[1440]: E1002 19:14:56.258738 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:14:56.260809 env[1138]: time="2023-10-02T19:14:56.260762826Z" level=info msg="CreateContainer within sandbox \"25f6e42f6b9eb99026f5d720b5f26a11200c7d648c954d37bbfb6fdf6bb3bdb5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:14:56.271367 env[1138]: time="2023-10-02T19:14:56.271311294Z" level=info msg="CreateContainer within sandbox \"25f6e42f6b9eb99026f5d720b5f26a11200c7d648c954d37bbfb6fdf6bb3bdb5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"6b4baa99a77cd37313c11ce36965a7d5a3768f4034beb31112200271fe045695\"" Oct 2 19:14:56.271806 env[1138]: time="2023-10-02T19:14:56.271772737Z" level=info msg="StartContainer for \"6b4baa99a77cd37313c11ce36965a7d5a3768f4034beb31112200271fe045695\"" Oct 2 19:14:56.292838 systemd[1]: Started cri-containerd-6b4baa99a77cd37313c11ce36965a7d5a3768f4034beb31112200271fe045695.scope. Oct 2 19:14:56.312038 systemd[1]: cri-containerd-6b4baa99a77cd37313c11ce36965a7d5a3768f4034beb31112200271fe045695.scope: Deactivated successfully. 
Oct 2 19:14:56.398486 env[1138]: time="2023-10-02T19:14:56.398436875Z" level=info msg="shim disconnected" id=6b4baa99a77cd37313c11ce36965a7d5a3768f4034beb31112200271fe045695 Oct 2 19:14:56.398755 env[1138]: time="2023-10-02T19:14:56.398735877Z" level=warning msg="cleaning up after shim disconnected" id=6b4baa99a77cd37313c11ce36965a7d5a3768f4034beb31112200271fe045695 namespace=k8s.io Oct 2 19:14:56.398819 env[1138]: time="2023-10-02T19:14:56.398806157Z" level=info msg="cleaning up dead shim" Oct 2 19:14:56.406646 env[1138]: time="2023-10-02T19:14:56.406590167Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:14:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2405 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:14:56Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/6b4baa99a77cd37313c11ce36965a7d5a3768f4034beb31112200271fe045695/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:14:56.407074 env[1138]: time="2023-10-02T19:14:56.407020890Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 19:14:56.408932 env[1138]: time="2023-10-02T19:14:56.408888702Z" level=error msg="Failed to pipe stdout of container \"6b4baa99a77cd37313c11ce36965a7d5a3768f4034beb31112200271fe045695\"" error="reading from a closed fifo" Oct 2 19:14:56.409033 env[1138]: time="2023-10-02T19:14:56.408909902Z" level=error msg="Failed to pipe stderr of container \"6b4baa99a77cd37313c11ce36965a7d5a3768f4034beb31112200271fe045695\"" error="reading from a closed fifo" Oct 2 19:14:56.410048 env[1138]: time="2023-10-02T19:14:56.410010750Z" level=error msg="StartContainer for \"6b4baa99a77cd37313c11ce36965a7d5a3768f4034beb31112200271fe045695\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:14:56.410445 kubelet[1440]: E1002 19:14:56.410256 1440 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="6b4baa99a77cd37313c11ce36965a7d5a3768f4034beb31112200271fe045695" Oct 2 19:14:56.410445 kubelet[1440]: E1002 19:14:56.410378 1440 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:14:56.410445 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:14:56.410445 kubelet[1440]: rm /hostbin/cilium-mount Oct 2 19:14:56.410620 kubelet[1440]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-pv2br,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-sdfhm_kube-system(03bb2c10-64ac-49ca-aee0-21ba65fb0462): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:14:56.410733 kubelet[1440]: E1002 19:14:56.410421 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-sdfhm" podUID=03bb2c10-64ac-49ca-aee0-21ba65fb0462 Oct 2 19:14:56.622983 kubelet[1440]: I1002 19:14:56.622651 1440 scope.go:115] "RemoveContainer" containerID="1ee84cf45e9a88a41448a629cefc7baf6aa5bef318d15d772a166717e7edca6a" Oct 2 19:14:56.622983 kubelet[1440]: I1002 19:14:56.622953 1440 scope.go:115] "RemoveContainer" containerID="1ee84cf45e9a88a41448a629cefc7baf6aa5bef318d15d772a166717e7edca6a" Oct 2 19:14:56.623651 env[1138]: time="2023-10-02T19:14:56.623606809Z" level=info msg="RemoveContainer for \"1ee84cf45e9a88a41448a629cefc7baf6aa5bef318d15d772a166717e7edca6a\"" Oct 2 19:14:56.624093 env[1138]: time="2023-10-02T19:14:56.623657249Z" level=info msg="RemoveContainer for \"1ee84cf45e9a88a41448a629cefc7baf6aa5bef318d15d772a166717e7edca6a\"" Oct 2 19:14:56.624380 env[1138]: 
time="2023-10-02T19:14:56.624346813Z" level=error msg="RemoveContainer for \"1ee84cf45e9a88a41448a629cefc7baf6aa5bef318d15d772a166717e7edca6a\" failed" error="failed to set removing state for container \"1ee84cf45e9a88a41448a629cefc7baf6aa5bef318d15d772a166717e7edca6a\": container is already in removing state" Oct 2 19:14:56.624846 kubelet[1440]: E1002 19:14:56.624666 1440 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"1ee84cf45e9a88a41448a629cefc7baf6aa5bef318d15d772a166717e7edca6a\": container is already in removing state" containerID="1ee84cf45e9a88a41448a629cefc7baf6aa5bef318d15d772a166717e7edca6a" Oct 2 19:14:56.624846 kubelet[1440]: E1002 19:14:56.624697 1440 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "1ee84cf45e9a88a41448a629cefc7baf6aa5bef318d15d772a166717e7edca6a": container is already in removing state; Skipping pod "cilium-sdfhm_kube-system(03bb2c10-64ac-49ca-aee0-21ba65fb0462)" Oct 2 19:14:56.624846 kubelet[1440]: E1002 19:14:56.624747 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:14:56.625061 kubelet[1440]: E1002 19:14:56.624950 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-sdfhm_kube-system(03bb2c10-64ac-49ca-aee0-21ba65fb0462)\"" pod="kube-system/cilium-sdfhm" podUID=03bb2c10-64ac-49ca-aee0-21ba65fb0462 Oct 2 19:14:56.627925 env[1138]: time="2023-10-02T19:14:56.627838556Z" level=info msg="RemoveContainer for \"1ee84cf45e9a88a41448a629cefc7baf6aa5bef318d15d772a166717e7edca6a\" returns successfully" Oct 2 19:14:57.211312 kubelet[1440]: E1002 
19:14:57.211252 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:57.268999 systemd[1]: run-containerd-runc-k8s.io-6b4baa99a77cd37313c11ce36965a7d5a3768f4034beb31112200271fe045695-runc.yPtOLU.mount: Deactivated successfully. Oct 2 19:14:57.269100 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b4baa99a77cd37313c11ce36965a7d5a3768f4034beb31112200271fe045695-rootfs.mount: Deactivated successfully. Oct 2 19:14:57.975957 update_engine[1130]: I1002 19:14:57.975870 1130 prefs.cc:51] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Oct 2 19:14:57.975957 update_engine[1130]: I1002 19:14:57.975921 1130 prefs.cc:51] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Oct 2 19:14:57.976455 update_engine[1130]: I1002 19:14:57.976430 1130 prefs.cc:51] aleph-version not present in /var/lib/update_engine/prefs Oct 2 19:14:57.982876 update_engine[1130]: I1002 19:14:57.982841 1130 omaha_request_params.cc:62] Current group set to lts Oct 2 19:14:57.983006 update_engine[1130]: I1002 19:14:57.982986 1130 update_attempter.cc:495] Already updated boot flags. Skipping. Oct 2 19:14:57.983006 update_engine[1130]: I1002 19:14:57.982996 1130 update_attempter.cc:638] Scheduling an action processor start. 
Oct 2 19:14:57.983059 update_engine[1130]: I1002 19:14:57.983012 1130 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 2 19:14:57.983059 update_engine[1130]: I1002 19:14:57.983036 1130 prefs.cc:51] previous-version not present in /var/lib/update_engine/prefs Oct 2 19:14:57.983429 update_engine[1130]: I1002 19:14:57.983407 1130 omaha_request_action.cc:268] Posting an Omaha request to https://public.update.flatcar-linux.net/v1/update/ Oct 2 19:14:57.983429 update_engine[1130]: I1002 19:14:57.983418 1130 omaha_request_action.cc:269] Request: Oct 2 19:14:57.983429 update_engine[1130]: Oct 2 19:14:57.983429 update_engine[1130]: Oct 2 19:14:57.983429 update_engine[1130]: Oct 2 19:14:57.983429 update_engine[1130]: Oct 2 19:14:57.983429 update_engine[1130]: Oct 2 19:14:57.983429 update_engine[1130]: Oct 2 19:14:57.983429 update_engine[1130]: Oct 2 19:14:57.983429 update_engine[1130]: Oct 2 19:14:57.983429 update_engine[1130]: I1002 19:14:57.983424 1130 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 2 19:14:57.983746 locksmithd[1170]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Oct 2 19:14:57.986698 update_engine[1130]: I1002 19:14:57.986670 1130 libcurl_http_fetcher.cc:174] Setting up curl options for HTTPS Oct 2 19:14:57.986861 update_engine[1130]: I1002 19:14:57.986840 1130 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Oct 2 19:14:58.211461 kubelet[1440]: E1002 19:14:58.211391 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:59.138760 update_engine[1130]: I1002 19:14:59.138708 1130 prefs.cc:51] update-server-cert-0-2 not present in /var/lib/update_engine/prefs Oct 2 19:14:59.139060 update_engine[1130]: I1002 19:14:59.138964 1130 prefs.cc:51] update-server-cert-0-1 not present in /var/lib/update_engine/prefs Oct 2 19:14:59.139164 update_engine[1130]: I1002 19:14:59.139133 1130 prefs.cc:51] update-server-cert-0-0 not present in /var/lib/update_engine/prefs Oct 2 19:14:59.212014 kubelet[1440]: E1002 19:14:59.211963 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:14:59.448476 update_engine[1130]: I1002 19:14:59.448081 1130 libcurl_http_fetcher.cc:263] HTTP response code: 200 Oct 2 19:14:59.449662 update_engine[1130]: I1002 19:14:59.449615 1130 libcurl_http_fetcher.cc:320] Transfer completed (200), 314 bytes downloaded Oct 2 19:14:59.449662 update_engine[1130]: I1002 19:14:59.449657 1130 omaha_request_action.cc:619] Omaha request response: Oct 2 19:14:59.449662 update_engine[1130]: Oct 2 19:14:59.455666 update_engine[1130]: I1002 19:14:59.455624 1130 omaha_request_action.cc:409] No update. Oct 2 19:14:59.455666 update_engine[1130]: I1002 19:14:59.455658 1130 action_processor.cc:82] ActionProcessor::ActionComplete: finished OmahaRequestAction, starting OmahaResponseHandlerAction Oct 2 19:14:59.455666 update_engine[1130]: I1002 19:14:59.455663 1130 omaha_response_handler_action.cc:36] There are no updates. Aborting. Oct 2 19:14:59.455666 update_engine[1130]: I1002 19:14:59.455667 1130 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaResponseHandlerAction action failed. Aborting processing. 
Oct 2 19:14:59.455666 update_engine[1130]: I1002 19:14:59.455670 1130 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaResponseHandlerAction Oct 2 19:14:59.455666 update_engine[1130]: I1002 19:14:59.455673 1130 update_attempter.cc:302] Processing Done. Oct 2 19:14:59.455850 update_engine[1130]: I1002 19:14:59.455686 1130 update_attempter.cc:338] No update. Oct 2 19:14:59.455850 update_engine[1130]: I1002 19:14:59.455696 1130 update_check_scheduler.cc:74] Next update check in 45m1s Oct 2 19:14:59.456168 locksmithd[1170]: LastCheckedTime=1696274099 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Oct 2 19:14:59.503571 kubelet[1440]: W1002 19:14:59.503528 1440 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03bb2c10_64ac_49ca_aee0_21ba65fb0462.slice/cri-containerd-6b4baa99a77cd37313c11ce36965a7d5a3768f4034beb31112200271fe045695.scope WatchSource:0}: task 6b4baa99a77cd37313c11ce36965a7d5a3768f4034beb31112200271fe045695 not found: not found Oct 2 19:15:00.176792 kubelet[1440]: E1002 19:15:00.176747 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:15:00.212154 kubelet[1440]: E1002 19:15:00.212093 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:01.213054 kubelet[1440]: E1002 19:15:01.212910 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:02.213798 kubelet[1440]: E1002 19:15:02.213742 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:03.214289 kubelet[1440]: E1002 19:15:03.214243 1440 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:04.215190 kubelet[1440]: E1002 19:15:04.215109 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:05.177350 kubelet[1440]: E1002 19:15:05.177313 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:15:05.217177 kubelet[1440]: E1002 19:15:05.215538 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:06.215955 kubelet[1440]: E1002 19:15:06.215917 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:07.216462 kubelet[1440]: E1002 19:15:07.216389 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:08.216863 kubelet[1440]: E1002 19:15:08.216794 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:08.259042 kubelet[1440]: E1002 19:15:08.258740 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:15:08.259042 kubelet[1440]: E1002 19:15:08.258948 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-sdfhm_kube-system(03bb2c10-64ac-49ca-aee0-21ba65fb0462)\"" pod="kube-system/cilium-sdfhm" podUID=03bb2c10-64ac-49ca-aee0-21ba65fb0462 Oct 2 19:15:09.217614 kubelet[1440]: E1002 19:15:09.217569 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:15:10.178312 kubelet[1440]: E1002 19:15:10.178239 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:15:10.218582 kubelet[1440]: E1002 19:15:10.218539 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:11.219706 kubelet[1440]: E1002 19:15:11.219648 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:12.219803 kubelet[1440]: E1002 19:15:12.219745 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:13.220389 kubelet[1440]: E1002 19:15:13.220319 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:14.220976 kubelet[1440]: E1002 19:15:14.220918 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:15.041842 kubelet[1440]: E1002 19:15:15.041782 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:15.056095 env[1138]: time="2023-10-02T19:15:15.055986334Z" level=info msg="StopPodSandbox for \"814f3db8534747302957a5e24b344f9dbb77884e358981888947ca83253501d7\"" Oct 2 19:15:15.056095 env[1138]: time="2023-10-02T19:15:15.056067974Z" level=info msg="TearDown network for sandbox \"814f3db8534747302957a5e24b344f9dbb77884e358981888947ca83253501d7\" successfully" Oct 2 19:15:15.056095 env[1138]: time="2023-10-02T19:15:15.056098734Z" level=info msg="StopPodSandbox for \"814f3db8534747302957a5e24b344f9dbb77884e358981888947ca83253501d7\" returns successfully" Oct 2 19:15:15.056476 env[1138]: time="2023-10-02T19:15:15.056410216Z" 
level=info msg="RemovePodSandbox for \"814f3db8534747302957a5e24b344f9dbb77884e358981888947ca83253501d7\"" Oct 2 19:15:15.056476 env[1138]: time="2023-10-02T19:15:15.056435736Z" level=info msg="Forcibly stopping sandbox \"814f3db8534747302957a5e24b344f9dbb77884e358981888947ca83253501d7\"" Oct 2 19:15:15.056527 env[1138]: time="2023-10-02T19:15:15.056491536Z" level=info msg="TearDown network for sandbox \"814f3db8534747302957a5e24b344f9dbb77884e358981888947ca83253501d7\" successfully" Oct 2 19:15:15.066840 env[1138]: time="2023-10-02T19:15:15.061705520Z" level=info msg="RemovePodSandbox \"814f3db8534747302957a5e24b344f9dbb77884e358981888947ca83253501d7\" returns successfully" Oct 2 19:15:15.067255 env[1138]: time="2023-10-02T19:15:15.067221746Z" level=info msg="StopPodSandbox for \"1345a4f9873144fb6491307fad0e28f0f7738b1cb763d1e1aec9ff89300916c3\"" Oct 2 19:15:15.067355 env[1138]: time="2023-10-02T19:15:15.067314347Z" level=info msg="TearDown network for sandbox \"1345a4f9873144fb6491307fad0e28f0f7738b1cb763d1e1aec9ff89300916c3\" successfully" Oct 2 19:15:15.067355 env[1138]: time="2023-10-02T19:15:15.067349867Z" level=info msg="StopPodSandbox for \"1345a4f9873144fb6491307fad0e28f0f7738b1cb763d1e1aec9ff89300916c3\" returns successfully" Oct 2 19:15:15.068618 env[1138]: time="2023-10-02T19:15:15.067589748Z" level=info msg="RemovePodSandbox for \"1345a4f9873144fb6491307fad0e28f0f7738b1cb763d1e1aec9ff89300916c3\"" Oct 2 19:15:15.068618 env[1138]: time="2023-10-02T19:15:15.067617948Z" level=info msg="Forcibly stopping sandbox \"1345a4f9873144fb6491307fad0e28f0f7738b1cb763d1e1aec9ff89300916c3\"" Oct 2 19:15:15.068618 env[1138]: time="2023-10-02T19:15:15.067723468Z" level=info msg="TearDown network for sandbox \"1345a4f9873144fb6491307fad0e28f0f7738b1cb763d1e1aec9ff89300916c3\" successfully" Oct 2 19:15:15.071777 env[1138]: time="2023-10-02T19:15:15.070097560Z" level=info msg="RemovePodSandbox \"1345a4f9873144fb6491307fad0e28f0f7738b1cb763d1e1aec9ff89300916c3\" returns 
successfully" Oct 2 19:15:15.178941 kubelet[1440]: E1002 19:15:15.178846 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:15:15.221606 kubelet[1440]: E1002 19:15:15.221559 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:16.222314 kubelet[1440]: E1002 19:15:16.222265 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:17.223343 kubelet[1440]: E1002 19:15:17.223299 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:18.224323 kubelet[1440]: E1002 19:15:18.224289 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:19.225165 kubelet[1440]: E1002 19:15:19.225109 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:20.180024 kubelet[1440]: E1002 19:15:20.179995 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:15:20.225442 kubelet[1440]: E1002 19:15:20.225406 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:20.259112 kubelet[1440]: E1002 19:15:20.259086 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:15:20.261806 env[1138]: time="2023-10-02T19:15:20.261487835Z" level=info msg="CreateContainer within sandbox 
\"25f6e42f6b9eb99026f5d720b5f26a11200c7d648c954d37bbfb6fdf6bb3bdb5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:15:20.278834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2199419380.mount: Deactivated successfully. Oct 2 19:15:20.282777 env[1138]: time="2023-10-02T19:15:20.282728806Z" level=info msg="CreateContainer within sandbox \"25f6e42f6b9eb99026f5d720b5f26a11200c7d648c954d37bbfb6fdf6bb3bdb5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"c3120456d0b3a2b1ace6102d7a1d8f3a964561c2e39de32ee7447c1cdbe84765\"" Oct 2 19:15:20.283278 env[1138]: time="2023-10-02T19:15:20.283248048Z" level=info msg="StartContainer for \"c3120456d0b3a2b1ace6102d7a1d8f3a964561c2e39de32ee7447c1cdbe84765\"" Oct 2 19:15:20.302355 systemd[1]: run-containerd-runc-k8s.io-c3120456d0b3a2b1ace6102d7a1d8f3a964561c2e39de32ee7447c1cdbe84765-runc.VwZhtY.mount: Deactivated successfully. Oct 2 19:15:20.303688 systemd[1]: Started cri-containerd-c3120456d0b3a2b1ace6102d7a1d8f3a964561c2e39de32ee7447c1cdbe84765.scope. Oct 2 19:15:20.321876 systemd[1]: cri-containerd-c3120456d0b3a2b1ace6102d7a1d8f3a964561c2e39de32ee7447c1cdbe84765.scope: Deactivated successfully. 
Oct 2 19:15:20.332200 env[1138]: time="2023-10-02T19:15:20.332130137Z" level=info msg="shim disconnected" id=c3120456d0b3a2b1ace6102d7a1d8f3a964561c2e39de32ee7447c1cdbe84765 Oct 2 19:15:20.332200 env[1138]: time="2023-10-02T19:15:20.332186177Z" level=warning msg="cleaning up after shim disconnected" id=c3120456d0b3a2b1ace6102d7a1d8f3a964561c2e39de32ee7447c1cdbe84765 namespace=k8s.io Oct 2 19:15:20.332200 env[1138]: time="2023-10-02T19:15:20.332198177Z" level=info msg="cleaning up dead shim" Oct 2 19:15:20.343261 env[1138]: time="2023-10-02T19:15:20.343164904Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:15:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2445 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:15:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:15:20Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c3120456d0b3a2b1ace6102d7a1d8f3a964561c2e39de32ee7447c1cdbe84765/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:15:20.343687 env[1138]: time="2023-10-02T19:15:20.343591026Z" level=error msg="copy shim log" error="read /proc/self/fd/46: file already closed" Oct 2 19:15:20.343846 env[1138]: time="2023-10-02T19:15:20.343797787Z" level=error msg="Failed to pipe stdout of container \"c3120456d0b3a2b1ace6102d7a1d8f3a964561c2e39de32ee7447c1cdbe84765\"" error="reading from a closed fifo" Oct 2 19:15:20.343904 env[1138]: time="2023-10-02T19:15:20.343868467Z" level=error msg="Failed to pipe stderr of container \"c3120456d0b3a2b1ace6102d7a1d8f3a964561c2e39de32ee7447c1cdbe84765\"" error="reading from a closed fifo" Oct 2 19:15:20.345534 env[1138]: time="2023-10-02T19:15:20.345487114Z" level=error msg="StartContainer for \"c3120456d0b3a2b1ace6102d7a1d8f3a964561c2e39de32ee7447c1cdbe84765\" failed" 
error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:15:20.345738 kubelet[1440]: E1002 19:15:20.345710 1440 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="c3120456d0b3a2b1ace6102d7a1d8f3a964561c2e39de32ee7447c1cdbe84765" Oct 2 19:15:20.345841 kubelet[1440]: E1002 19:15:20.345827 1440 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:15:20.345841 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:15:20.345841 kubelet[1440]: rm /hostbin/cilium-mount Oct 2 19:15:20.345841 kubelet[1440]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-pv2br,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-sdfhm_kube-system(03bb2c10-64ac-49ca-aee0-21ba65fb0462): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:15:20.345963 kubelet[1440]: E1002 19:15:20.345872 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-sdfhm" podUID=03bb2c10-64ac-49ca-aee0-21ba65fb0462 Oct 2 19:15:20.662359 kubelet[1440]: I1002 19:15:20.662055 1440 scope.go:115] "RemoveContainer" containerID="6b4baa99a77cd37313c11ce36965a7d5a3768f4034beb31112200271fe045695" Oct 2 19:15:20.662512 kubelet[1440]: I1002 19:15:20.662445 1440 scope.go:115] "RemoveContainer" containerID="6b4baa99a77cd37313c11ce36965a7d5a3768f4034beb31112200271fe045695" Oct 2 19:15:20.663879 env[1138]: time="2023-10-02T19:15:20.663547316Z" level=info msg="RemoveContainer for \"6b4baa99a77cd37313c11ce36965a7d5a3768f4034beb31112200271fe045695\"" Oct 2 19:15:20.663879 env[1138]: time="2023-10-02T19:15:20.663683597Z" level=info msg="RemoveContainer for \"6b4baa99a77cd37313c11ce36965a7d5a3768f4034beb31112200271fe045695\"" Oct 2 19:15:20.663879 env[1138]: time="2023-10-02T19:15:20.663778157Z" level=error msg="RemoveContainer for \"6b4baa99a77cd37313c11ce36965a7d5a3768f4034beb31112200271fe045695\" failed" error="failed to set removing state for container \"6b4baa99a77cd37313c11ce36965a7d5a3768f4034beb31112200271fe045695\": container is already in removing state" Oct 2 19:15:20.664051 kubelet[1440]: E1002 19:15:20.663903 1440 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"6b4baa99a77cd37313c11ce36965a7d5a3768f4034beb31112200271fe045695\": container is already in removing state" containerID="6b4baa99a77cd37313c11ce36965a7d5a3768f4034beb31112200271fe045695" Oct 2 19:15:20.664051 kubelet[1440]: E1002 19:15:20.663926 1440 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "6b4baa99a77cd37313c11ce36965a7d5a3768f4034beb31112200271fe045695": container is already in removing state; Skipping pod "cilium-sdfhm_kube-system(03bb2c10-64ac-49ca-aee0-21ba65fb0462)" Oct 2 
19:15:20.664051 kubelet[1440]: E1002 19:15:20.663991 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:15:20.664253 kubelet[1440]: E1002 19:15:20.664219 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-sdfhm_kube-system(03bb2c10-64ac-49ca-aee0-21ba65fb0462)\"" pod="kube-system/cilium-sdfhm" podUID=03bb2c10-64ac-49ca-aee0-21ba65fb0462 Oct 2 19:15:20.667472 env[1138]: time="2023-10-02T19:15:20.666620329Z" level=info msg="RemoveContainer for \"6b4baa99a77cd37313c11ce36965a7d5a3768f4034beb31112200271fe045695\" returns successfully" Oct 2 19:15:21.226547 kubelet[1440]: E1002 19:15:21.226507 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:21.277075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3120456d0b3a2b1ace6102d7a1d8f3a964561c2e39de32ee7447c1cdbe84765-rootfs.mount: Deactivated successfully. 
Oct 2 19:15:22.226731 kubelet[1440]: E1002 19:15:22.226671 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:23.227329 kubelet[1440]: E1002 19:15:23.227287 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:23.436558 kubelet[1440]: W1002 19:15:23.436511 1440 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03bb2c10_64ac_49ca_aee0_21ba65fb0462.slice/cri-containerd-c3120456d0b3a2b1ace6102d7a1d8f3a964561c2e39de32ee7447c1cdbe84765.scope WatchSource:0}: task c3120456d0b3a2b1ace6102d7a1d8f3a964561c2e39de32ee7447c1cdbe84765 not found: not found Oct 2 19:15:24.227695 kubelet[1440]: E1002 19:15:24.227657 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:25.180577 kubelet[1440]: E1002 19:15:25.180553 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:15:25.228808 kubelet[1440]: E1002 19:15:25.228773 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:26.229410 kubelet[1440]: E1002 19:15:26.229378 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:27.230755 kubelet[1440]: E1002 19:15:27.230698 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:28.231882 kubelet[1440]: E1002 19:15:28.231821 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:29.232706 kubelet[1440]: E1002 19:15:29.232654 1440 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:30.182043 kubelet[1440]: E1002 19:15:30.182016 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:15:30.233282 kubelet[1440]: E1002 19:15:30.233233 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:31.234079 kubelet[1440]: E1002 19:15:31.234040 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:32.236055 kubelet[1440]: E1002 19:15:32.236005 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:33.236987 kubelet[1440]: E1002 19:15:33.236941 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:34.237365 kubelet[1440]: E1002 19:15:34.237293 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:35.042249 kubelet[1440]: E1002 19:15:35.042215 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:35.183479 kubelet[1440]: E1002 19:15:35.183436 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:15:35.238145 kubelet[1440]: E1002 19:15:35.238106 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:36.238761 kubelet[1440]: E1002 19:15:36.238728 1440 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:36.258386 kubelet[1440]: E1002 19:15:36.258362 1440 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:15:36.258671 kubelet[1440]: E1002 19:15:36.258624 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-sdfhm_kube-system(03bb2c10-64ac-49ca-aee0-21ba65fb0462)\"" pod="kube-system/cilium-sdfhm" podUID=03bb2c10-64ac-49ca-aee0-21ba65fb0462 Oct 2 19:15:37.239801 kubelet[1440]: E1002 19:15:37.239732 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:38.240501 kubelet[1440]: E1002 19:15:38.240439 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:39.241079 kubelet[1440]: E1002 19:15:39.241033 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:40.184832 kubelet[1440]: E1002 19:15:40.184806 1440 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:15:40.241180 kubelet[1440]: E1002 19:15:40.241140 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:15:40.743185 env[1138]: time="2023-10-02T19:15:40.743126829Z" level=info msg="StopPodSandbox for \"25f6e42f6b9eb99026f5d720b5f26a11200c7d648c954d37bbfb6fdf6bb3bdb5\"" Oct 2 19:15:40.744369 env[1138]: time="2023-10-02T19:15:40.743205189Z" level=info msg="Container to stop \"c3120456d0b3a2b1ace6102d7a1d8f3a964561c2e39de32ee7447c1cdbe84765\" 
must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:15:40.744401 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-25f6e42f6b9eb99026f5d720b5f26a11200c7d648c954d37bbfb6fdf6bb3bdb5-shm.mount: Deactivated successfully. Oct 2 19:15:40.748757 env[1138]: time="2023-10-02T19:15:40.748723166Z" level=info msg="StopContainer for \"05332a06e10757f6308c3a2dc944f006f9562c148c3a1491ad31b7c44d2f1ff2\" with timeout 30 (s)" Oct 2 19:15:40.749204 env[1138]: time="2023-10-02T19:15:40.749176647Z" level=info msg="Stop container \"05332a06e10757f6308c3a2dc944f006f9562c148c3a1491ad31b7c44d2f1ff2\" with signal terminated" Oct 2 19:15:40.751081 systemd[1]: cri-containerd-25f6e42f6b9eb99026f5d720b5f26a11200c7d648c954d37bbfb6fdf6bb3bdb5.scope: Deactivated successfully. Oct 2 19:15:40.750000 audit: BPF prog-id=82 op=UNLOAD Oct 2 19:15:40.751907 kernel: kauditd_printk_skb: 164 callbacks suppressed Oct 2 19:15:40.752025 kernel: audit: type=1334 audit(1696274140.750:740): prog-id=82 op=UNLOAD Oct 2 19:15:40.754000 audit: BPF prog-id=85 op=UNLOAD Oct 2 19:15:40.756705 kernel: audit: type=1334 audit(1696274140.754:741): prog-id=85 op=UNLOAD Oct 2 19:15:40.760512 systemd[1]: cri-containerd-05332a06e10757f6308c3a2dc944f006f9562c148c3a1491ad31b7c44d2f1ff2.scope: Deactivated successfully. Oct 2 19:15:40.760000 audit: BPF prog-id=90 op=UNLOAD Oct 2 19:15:40.762728 kernel: audit: type=1334 audit(1696274140.760:742): prog-id=90 op=UNLOAD Oct 2 19:15:40.764000 audit: BPF prog-id=93 op=UNLOAD Oct 2 19:15:40.765648 kernel: audit: type=1334 audit(1696274140.764:743): prog-id=93 op=UNLOAD Oct 2 19:15:40.781076 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05332a06e10757f6308c3a2dc944f006f9562c148c3a1491ad31b7c44d2f1ff2-rootfs.mount: Deactivated successfully. Oct 2 19:15:40.785806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25f6e42f6b9eb99026f5d720b5f26a11200c7d648c954d37bbfb6fdf6bb3bdb5-rootfs.mount: Deactivated successfully. 
Oct 2 19:15:40.787162 env[1138]: time="2023-10-02T19:15:40.787111561Z" level=info msg="shim disconnected" id=05332a06e10757f6308c3a2dc944f006f9562c148c3a1491ad31b7c44d2f1ff2 Oct 2 19:15:40.787536 env[1138]: time="2023-10-02T19:15:40.787509762Z" level=warning msg="cleaning up after shim disconnected" id=05332a06e10757f6308c3a2dc944f006f9562c148c3a1491ad31b7c44d2f1ff2 namespace=k8s.io Oct 2 19:15:40.787728 env[1138]: time="2023-10-02T19:15:40.787710163Z" level=info msg="cleaning up dead shim" Oct 2 19:15:40.787945 env[1138]: time="2023-10-02T19:15:40.787423162Z" level=info msg="shim disconnected" id=25f6e42f6b9eb99026f5d720b5f26a11200c7d648c954d37bbfb6fdf6bb3bdb5 Oct 2 19:15:40.787945 env[1138]: time="2023-10-02T19:15:40.787943404Z" level=warning msg="cleaning up after shim disconnected" id=25f6e42f6b9eb99026f5d720b5f26a11200c7d648c954d37bbfb6fdf6bb3bdb5 namespace=k8s.io Oct 2 19:15:40.788043 env[1138]: time="2023-10-02T19:15:40.787952244Z" level=info msg="cleaning up dead shim" Oct 2 19:15:40.796286 env[1138]: time="2023-10-02T19:15:40.796247909Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:15:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2497 runtime=io.containerd.runc.v2\n" Oct 2 19:15:40.796819 env[1138]: time="2023-10-02T19:15:40.796783510Z" level=info msg="TearDown network for sandbox \"25f6e42f6b9eb99026f5d720b5f26a11200c7d648c954d37bbfb6fdf6bb3bdb5\" successfully" Oct 2 19:15:40.796915 env[1138]: time="2023-10-02T19:15:40.796895191Z" level=info msg="StopPodSandbox for \"25f6e42f6b9eb99026f5d720b5f26a11200c7d648c954d37bbfb6fdf6bb3bdb5\" returns successfully" Oct 2 19:15:40.798134 env[1138]: time="2023-10-02T19:15:40.798104634Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:15:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2496 runtime=io.containerd.runc.v2\n" Oct 2 19:15:40.800167 env[1138]: time="2023-10-02T19:15:40.800120720Z" level=info msg="StopContainer for 
\"05332a06e10757f6308c3a2dc944f006f9562c148c3a1491ad31b7c44d2f1ff2\" returns successfully" Oct 2 19:15:40.803475 env[1138]: time="2023-10-02T19:15:40.803434290Z" level=info msg="StopPodSandbox for \"6b7e742cdd89cf867ce5816dd63f6aea3de8500113a5c45bbd549e3524a1629c\"" Oct 2 19:15:40.803657 env[1138]: time="2023-10-02T19:15:40.803606251Z" level=info msg="Container to stop \"05332a06e10757f6308c3a2dc944f006f9562c148c3a1491ad31b7c44d2f1ff2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:15:40.807078 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6b7e742cdd89cf867ce5816dd63f6aea3de8500113a5c45bbd549e3524a1629c-shm.mount: Deactivated successfully. Oct 2 19:15:40.813350 systemd[1]: cri-containerd-6b7e742cdd89cf867ce5816dd63f6aea3de8500113a5c45bbd549e3524a1629c.scope: Deactivated successfully. Oct 2 19:15:40.812000 audit: BPF prog-id=86 op=UNLOAD Oct 2 19:15:40.814676 kernel: audit: type=1334 audit(1696274140.812:744): prog-id=86 op=UNLOAD Oct 2 19:15:40.817000 audit: BPF prog-id=89 op=UNLOAD Oct 2 19:15:40.818663 kernel: audit: type=1334 audit(1696274140.817:745): prog-id=89 op=UNLOAD Oct 2 19:15:40.834924 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b7e742cdd89cf867ce5816dd63f6aea3de8500113a5c45bbd549e3524a1629c-rootfs.mount: Deactivated successfully. 
Oct 2 19:15:40.838359 env[1138]: time="2023-10-02T19:15:40.838301755Z" level=info msg="shim disconnected" id=6b7e742cdd89cf867ce5816dd63f6aea3de8500113a5c45bbd549e3524a1629c
Oct 2 19:15:40.838514 env[1138]: time="2023-10-02T19:15:40.838361035Z" level=warning msg="cleaning up after shim disconnected" id=6b7e742cdd89cf867ce5816dd63f6aea3de8500113a5c45bbd549e3524a1629c namespace=k8s.io
Oct 2 19:15:40.838514 env[1138]: time="2023-10-02T19:15:40.838373195Z" level=info msg="cleaning up dead shim"
Oct 2 19:15:40.846323 env[1138]: time="2023-10-02T19:15:40.846279379Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:15:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2539 runtime=io.containerd.runc.v2\n"
Oct 2 19:15:40.846608 env[1138]: time="2023-10-02T19:15:40.846583660Z" level=info msg="TearDown network for sandbox \"6b7e742cdd89cf867ce5816dd63f6aea3de8500113a5c45bbd549e3524a1629c\" successfully"
Oct 2 19:15:40.846685 env[1138]: time="2023-10-02T19:15:40.846609900Z" level=info msg="StopPodSandbox for \"6b7e742cdd89cf867ce5816dd63f6aea3de8500113a5c45bbd549e3524a1629c\" returns successfully"
Oct 2 19:15:40.920229 kubelet[1440]: I1002 19:15:40.917729 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/03bb2c10-64ac-49ca-aee0-21ba65fb0462-clustermesh-secrets\") pod \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\" (UID: \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\") "
Oct 2 19:15:40.920229 kubelet[1440]: I1002 19:15:40.917775 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/03bb2c10-64ac-49ca-aee0-21ba65fb0462-cilium-ipsec-secrets\") pod \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\" (UID: \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\") "
Oct 2 19:15:40.920229 kubelet[1440]: I1002 19:15:40.917795 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-hostproc\") pod \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\" (UID: \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\") "
Oct 2 19:15:40.920229 kubelet[1440]: I1002 19:15:40.917815 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/03bb2c10-64ac-49ca-aee0-21ba65fb0462-hubble-tls\") pod \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\" (UID: \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\") "
Oct 2 19:15:40.920229 kubelet[1440]: I1002 19:15:40.917836 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9rkr\" (UniqueName: \"kubernetes.io/projected/21476d9a-edcc-4348-aa86-49dda98d4417-kube-api-access-m9rkr\") pod \"21476d9a-edcc-4348-aa86-49dda98d4417\" (UID: \"21476d9a-edcc-4348-aa86-49dda98d4417\") "
Oct 2 19:15:40.920229 kubelet[1440]: I1002 19:15:40.917853 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-lib-modules\") pod \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\" (UID: \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\") "
Oct 2 19:15:40.920497 kubelet[1440]: I1002 19:15:40.917870 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-xtables-lock\") pod \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\" (UID: \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\") "
Oct 2 19:15:40.920497 kubelet[1440]: I1002 19:15:40.917889 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-host-proc-sys-kernel\") pod \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\" (UID: \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\") "
Oct 2 19:15:40.920497 kubelet[1440]: I1002 19:15:40.917913 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/21476d9a-edcc-4348-aa86-49dda98d4417-cilium-config-path\") pod \"21476d9a-edcc-4348-aa86-49dda98d4417\" (UID: \"21476d9a-edcc-4348-aa86-49dda98d4417\") "
Oct 2 19:15:40.920497 kubelet[1440]: I1002 19:15:40.917931 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-etc-cni-netd\") pod \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\" (UID: \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\") "
Oct 2 19:15:40.920497 kubelet[1440]: I1002 19:15:40.917950 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-host-proc-sys-net\") pod \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\" (UID: \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\") "
Oct 2 19:15:40.920497 kubelet[1440]: I1002 19:15:40.917972 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/03bb2c10-64ac-49ca-aee0-21ba65fb0462-cilium-config-path\") pod \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\" (UID: \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\") "
Oct 2 19:15:40.920670 kubelet[1440]: I1002 19:15:40.917991 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv2br\" (UniqueName: \"kubernetes.io/projected/03bb2c10-64ac-49ca-aee0-21ba65fb0462-kube-api-access-pv2br\") pod \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\" (UID: \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\") "
Oct 2 19:15:40.920670 kubelet[1440]: I1002 19:15:40.918008 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-cilium-run\") pod \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\" (UID: \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\") "
Oct 2 19:15:40.920670 kubelet[1440]: I1002 19:15:40.918025 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-cni-path\") pod \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\" (UID: \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\") "
Oct 2 19:15:40.920670 kubelet[1440]: I1002 19:15:40.918032 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "03bb2c10-64ac-49ca-aee0-21ba65fb0462" (UID: "03bb2c10-64ac-49ca-aee0-21ba65fb0462"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:15:40.920670 kubelet[1440]: I1002 19:15:40.918042 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-bpf-maps\") pod \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\" (UID: \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\") "
Oct 2 19:15:40.920670 kubelet[1440]: I1002 19:15:40.918065 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "03bb2c10-64ac-49ca-aee0-21ba65fb0462" (UID: "03bb2c10-64ac-49ca-aee0-21ba65fb0462"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:15:40.920822 kubelet[1440]: I1002 19:15:40.918085 1440 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-cilium-cgroup\") pod \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\" (UID: \"03bb2c10-64ac-49ca-aee0-21ba65fb0462\") "
Oct 2 19:15:40.920822 kubelet[1440]: I1002 19:15:40.918111 1440 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-bpf-maps\") on node \"10.0.0.113\" DevicePath \"\""
Oct 2 19:15:40.920822 kubelet[1440]: I1002 19:15:40.918121 1440 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-host-proc-sys-kernel\") on node \"10.0.0.113\" DevicePath \"\""
Oct 2 19:15:40.920822 kubelet[1440]: I1002 19:15:40.918137 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "03bb2c10-64ac-49ca-aee0-21ba65fb0462" (UID: "03bb2c10-64ac-49ca-aee0-21ba65fb0462"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:15:40.920822 kubelet[1440]: I1002 19:15:40.918166 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-hostproc" (OuterVolumeSpecName: "hostproc") pod "03bb2c10-64ac-49ca-aee0-21ba65fb0462" (UID: "03bb2c10-64ac-49ca-aee0-21ba65fb0462"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:15:40.920822 kubelet[1440]: W1002 19:15:40.918231 1440 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/21476d9a-edcc-4348-aa86-49dda98d4417/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Oct 2 19:15:40.920822 kubelet[1440]: W1002 19:15:40.918468 1440 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/03bb2c10-64ac-49ca-aee0-21ba65fb0462/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Oct 2 19:15:40.921046 kubelet[1440]: I1002 19:15:40.920379 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "03bb2c10-64ac-49ca-aee0-21ba65fb0462" (UID: "03bb2c10-64ac-49ca-aee0-21ba65fb0462"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:15:40.921046 kubelet[1440]: I1002 19:15:40.920425 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "03bb2c10-64ac-49ca-aee0-21ba65fb0462" (UID: "03bb2c10-64ac-49ca-aee0-21ba65fb0462"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:15:40.921046 kubelet[1440]: I1002 19:15:40.920422 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21476d9a-edcc-4348-aa86-49dda98d4417-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "21476d9a-edcc-4348-aa86-49dda98d4417" (UID: "21476d9a-edcc-4348-aa86-49dda98d4417"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 2 19:15:40.921046 kubelet[1440]: I1002 19:15:40.920454 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "03bb2c10-64ac-49ca-aee0-21ba65fb0462" (UID: "03bb2c10-64ac-49ca-aee0-21ba65fb0462"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:15:40.921046 kubelet[1440]: I1002 19:15:40.920471 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "03bb2c10-64ac-49ca-aee0-21ba65fb0462" (UID: "03bb2c10-64ac-49ca-aee0-21ba65fb0462"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:15:40.921171 kubelet[1440]: I1002 19:15:40.920608 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-cni-path" (OuterVolumeSpecName: "cni-path") pod "03bb2c10-64ac-49ca-aee0-21ba65fb0462" (UID: "03bb2c10-64ac-49ca-aee0-21ba65fb0462"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:15:40.921171 kubelet[1440]: I1002 19:15:40.920661 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "03bb2c10-64ac-49ca-aee0-21ba65fb0462" (UID: "03bb2c10-64ac-49ca-aee0-21ba65fb0462"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:15:40.921242 kubelet[1440]: I1002 19:15:40.921211 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03bb2c10-64ac-49ca-aee0-21ba65fb0462-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "03bb2c10-64ac-49ca-aee0-21ba65fb0462" (UID: "03bb2c10-64ac-49ca-aee0-21ba65fb0462"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 2 19:15:40.921700 kubelet[1440]: I1002 19:15:40.921675 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03bb2c10-64ac-49ca-aee0-21ba65fb0462-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "03bb2c10-64ac-49ca-aee0-21ba65fb0462" (UID: "03bb2c10-64ac-49ca-aee0-21ba65fb0462"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 2 19:15:40.922125 kubelet[1440]: I1002 19:15:40.922096 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03bb2c10-64ac-49ca-aee0-21ba65fb0462-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "03bb2c10-64ac-49ca-aee0-21ba65fb0462" (UID: "03bb2c10-64ac-49ca-aee0-21ba65fb0462"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 19:15:40.923610 kubelet[1440]: I1002 19:15:40.923566 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03bb2c10-64ac-49ca-aee0-21ba65fb0462-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "03bb2c10-64ac-49ca-aee0-21ba65fb0462" (UID: "03bb2c10-64ac-49ca-aee0-21ba65fb0462"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 2 19:15:40.924492 kubelet[1440]: I1002 19:15:40.924461 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21476d9a-edcc-4348-aa86-49dda98d4417-kube-api-access-m9rkr" (OuterVolumeSpecName: "kube-api-access-m9rkr") pod "21476d9a-edcc-4348-aa86-49dda98d4417" (UID: "21476d9a-edcc-4348-aa86-49dda98d4417"). InnerVolumeSpecName "kube-api-access-m9rkr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 19:15:40.925009 kubelet[1440]: I1002 19:15:40.924983 1440 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03bb2c10-64ac-49ca-aee0-21ba65fb0462-kube-api-access-pv2br" (OuterVolumeSpecName: "kube-api-access-pv2br") pod "03bb2c10-64ac-49ca-aee0-21ba65fb0462" (UID: "03bb2c10-64ac-49ca-aee0-21ba65fb0462"). InnerVolumeSpecName "kube-api-access-pv2br". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 19:15:41.018334 kubelet[1440]: I1002 19:15:41.018226 1440 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-cilium-cgroup\") on node \"10.0.0.113\" DevicePath \"\""
Oct 2 19:15:41.018498 kubelet[1440]: I1002 19:15:41.018486 1440 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/03bb2c10-64ac-49ca-aee0-21ba65fb0462-clustermesh-secrets\") on node \"10.0.0.113\" DevicePath \"\""
Oct 2 19:15:41.018558 kubelet[1440]: I1002 19:15:41.018547 1440 reconciler.go:399] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/03bb2c10-64ac-49ca-aee0-21ba65fb0462-cilium-ipsec-secrets\") on node \"10.0.0.113\" DevicePath \"\""
Oct 2 19:15:41.018618 kubelet[1440]: I1002 19:15:41.018611 1440 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-hostproc\") on node \"10.0.0.113\" DevicePath \"\""
Oct 2 19:15:41.018724 kubelet[1440]: I1002 19:15:41.018713 1440 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/03bb2c10-64ac-49ca-aee0-21ba65fb0462-hubble-tls\") on node \"10.0.0.113\" DevicePath \"\""
Oct 2 19:15:41.018787 kubelet[1440]: I1002 19:15:41.018779 1440 reconciler.go:399] "Volume detached for volume \"kube-api-access-m9rkr\" (UniqueName: \"kubernetes.io/projected/21476d9a-edcc-4348-aa86-49dda98d4417-kube-api-access-m9rkr\") on node \"10.0.0.113\" DevicePath \"\""
Oct 2 19:15:41.018854 kubelet[1440]: I1002 19:15:41.018846 1440 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-lib-modules\") on node \"10.0.0.113\" DevicePath \"\""
Oct 2 19:15:41.018916 kubelet[1440]: I1002 19:15:41.018908 1440 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-xtables-lock\") on node \"10.0.0.113\" DevicePath \"\""
Oct 2 19:15:41.018985 kubelet[1440]: I1002 19:15:41.018976 1440 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/21476d9a-edcc-4348-aa86-49dda98d4417-cilium-config-path\") on node \"10.0.0.113\" DevicePath \"\""
Oct 2 19:15:41.019048 kubelet[1440]: I1002 19:15:41.019037 1440 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-etc-cni-netd\") on node \"10.0.0.113\" DevicePath \"\""
Oct 2 19:15:41.019107 kubelet[1440]: I1002 19:15:41.019098 1440 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-host-proc-sys-net\") on node \"10.0.0.113\" DevicePath \"\""
Oct 2 19:15:41.019173 kubelet[1440]: I1002 19:15:41.019164 1440 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/03bb2c10-64ac-49ca-aee0-21ba65fb0462-cilium-config-path\") on node \"10.0.0.113\" DevicePath \"\""
Oct 2 19:15:41.019237 kubelet[1440]: I1002 19:15:41.019229 1440 reconciler.go:399] "Volume detached for volume \"kube-api-access-pv2br\" (UniqueName: \"kubernetes.io/projected/03bb2c10-64ac-49ca-aee0-21ba65fb0462-kube-api-access-pv2br\") on node \"10.0.0.113\" DevicePath \"\""
Oct 2 19:15:41.019292 kubelet[1440]: I1002 19:15:41.019285 1440 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-cilium-run\") on node \"10.0.0.113\" DevicePath \"\""
Oct 2 19:15:41.019346 kubelet[1440]: I1002 19:15:41.019338 1440 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/03bb2c10-64ac-49ca-aee0-21ba65fb0462-cni-path\") on node \"10.0.0.113\" DevicePath \"\""
Oct 2 19:15:41.241607 kubelet[1440]: E1002 19:15:41.241563 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:15:41.265843 systemd[1]: Removed slice kubepods-besteffort-pod21476d9a_edcc_4348_aa86_49dda98d4417.slice.
Oct 2 19:15:41.266830 systemd[1]: Removed slice kubepods-burstable-pod03bb2c10_64ac_49ca_aee0_21ba65fb0462.slice.
Oct 2 19:15:41.702830 kubelet[1440]: I1002 19:15:41.702804 1440 scope.go:115] "RemoveContainer" containerID="05332a06e10757f6308c3a2dc944f006f9562c148c3a1491ad31b7c44d2f1ff2"
Oct 2 19:15:41.705070 env[1138]: time="2023-10-02T19:15:41.705026360Z" level=info msg="RemoveContainer for \"05332a06e10757f6308c3a2dc944f006f9562c148c3a1491ad31b7c44d2f1ff2\""
Oct 2 19:15:41.711073 env[1138]: time="2023-10-02T19:15:41.711027538Z" level=info msg="RemoveContainer for \"05332a06e10757f6308c3a2dc944f006f9562c148c3a1491ad31b7c44d2f1ff2\" returns successfully"
Oct 2 19:15:41.711341 kubelet[1440]: I1002 19:15:41.711316 1440 scope.go:115] "RemoveContainer" containerID="05332a06e10757f6308c3a2dc944f006f9562c148c3a1491ad31b7c44d2f1ff2"
Oct 2 19:15:41.711662 env[1138]: time="2023-10-02T19:15:41.711531819Z" level=error msg="ContainerStatus for \"05332a06e10757f6308c3a2dc944f006f9562c148c3a1491ad31b7c44d2f1ff2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"05332a06e10757f6308c3a2dc944f006f9562c148c3a1491ad31b7c44d2f1ff2\": not found"
Oct 2 19:15:41.711831 kubelet[1440]: E1002 19:15:41.711816 1440 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"05332a06e10757f6308c3a2dc944f006f9562c148c3a1491ad31b7c44d2f1ff2\": not found" containerID="05332a06e10757f6308c3a2dc944f006f9562c148c3a1491ad31b7c44d2f1ff2"
Oct 2 19:15:41.711923 kubelet[1440]: I1002 19:15:41.711912 1440 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:05332a06e10757f6308c3a2dc944f006f9562c148c3a1491ad31b7c44d2f1ff2} err="failed to get container status \"05332a06e10757f6308c3a2dc944f006f9562c148c3a1491ad31b7c44d2f1ff2\": rpc error: code = NotFound desc = an error occurred when try to find container \"05332a06e10757f6308c3a2dc944f006f9562c148c3a1491ad31b7c44d2f1ff2\": not found"
Oct 2 19:15:41.711994 kubelet[1440]: I1002 19:15:41.711983 1440 scope.go:115] "RemoveContainer" containerID="c3120456d0b3a2b1ace6102d7a1d8f3a964561c2e39de32ee7447c1cdbe84765"
Oct 2 19:15:41.713069 env[1138]: time="2023-10-02T19:15:41.713041424Z" level=info msg="RemoveContainer for \"c3120456d0b3a2b1ace6102d7a1d8f3a964561c2e39de32ee7447c1cdbe84765\""
Oct 2 19:15:41.721121 env[1138]: time="2023-10-02T19:15:41.721087128Z" level=info msg="RemoveContainer for \"c3120456d0b3a2b1ace6102d7a1d8f3a964561c2e39de32ee7447c1cdbe84765\" returns successfully"
Oct 2 19:15:41.744367 systemd[1]: var-lib-kubelet-pods-03bb2c10\x2d64ac\x2d49ca\x2daee0\x2d21ba65fb0462-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpv2br.mount: Deactivated successfully.
Oct 2 19:15:41.744463 systemd[1]: var-lib-kubelet-pods-21476d9a\x2dedcc\x2d4348\x2daa86\x2d49dda98d4417-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm9rkr.mount: Deactivated successfully.
Oct 2 19:15:41.744522 systemd[1]: var-lib-kubelet-pods-03bb2c10\x2d64ac\x2d49ca\x2daee0\x2d21ba65fb0462-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Oct 2 19:15:41.744575 systemd[1]: var-lib-kubelet-pods-03bb2c10\x2d64ac\x2d49ca\x2daee0\x2d21ba65fb0462-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Oct 2 19:15:41.744623 systemd[1]: var-lib-kubelet-pods-03bb2c10\x2d64ac\x2d49ca\x2daee0\x2d21ba65fb0462-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Oct 2 19:15:42.241942 kubelet[1440]: E1002 19:15:42.241881 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"