Oct 2 19:55:54.747020 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Oct 2 19:55:54.747041 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Oct 2 17:55:37 -00 2023 Oct 2 19:55:54.747049 kernel: efi: EFI v2.70 by EDK II Oct 2 19:55:54.747055 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Oct 2 19:55:54.747060 kernel: random: crng init done Oct 2 19:55:54.747066 kernel: ACPI: Early table checksum verification disabled Oct 2 19:55:54.747072 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Oct 2 19:55:54.747079 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Oct 2 19:55:54.747085 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:55:54.747090 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:55:54.747096 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:55:54.747101 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:55:54.747106 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:55:54.747112 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:55:54.747120 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:55:54.747126 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:55:54.747132 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:55:54.747138 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Oct 2 19:55:54.747144 kernel: NUMA: Failed to initialise from firmware Oct 2 19:55:54.747150 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Oct 2 19:55:54.747156 kernel: NUMA: NODE_DATA [mem 0xdcb09900-0xdcb0efff] Oct 2 19:55:54.747162 kernel: Zone ranges: Oct 2 19:55:54.747168 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Oct 2 19:55:54.747175 kernel: DMA32 empty Oct 2 19:55:54.747181 kernel: Normal empty Oct 2 19:55:54.747186 kernel: Movable zone start for each node Oct 2 19:55:54.747192 kernel: Early memory node ranges Oct 2 19:55:54.747198 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Oct 2 19:55:54.747204 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Oct 2 19:55:54.747210 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Oct 2 19:55:54.747216 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Oct 2 19:55:54.747222 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Oct 2 19:55:54.747228 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Oct 2 19:55:54.747234 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Oct 2 19:55:54.747239 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Oct 2 19:55:54.747247 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Oct 2 19:55:54.747253 kernel: psci: probing for conduit method from ACPI. Oct 2 19:55:54.747258 kernel: psci: PSCIv1.1 detected in firmware. 
Oct 2 19:55:54.747264 kernel: psci: Using standard PSCI v0.2 function IDs Oct 2 19:55:54.747270 kernel: psci: Trusted OS migration not required Oct 2 19:55:54.747290 kernel: psci: SMC Calling Convention v1.1 Oct 2 19:55:54.747297 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Oct 2 19:55:54.747311 kernel: ACPI: SRAT not present Oct 2 19:55:54.747318 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784 Oct 2 19:55:54.747324 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096 Oct 2 19:55:54.747330 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Oct 2 19:55:54.747339 kernel: Detected PIPT I-cache on CPU0 Oct 2 19:55:54.747345 kernel: CPU features: detected: GIC system register CPU interface Oct 2 19:55:54.747352 kernel: CPU features: detected: Hardware dirty bit management Oct 2 19:55:54.747358 kernel: CPU features: detected: Spectre-v4 Oct 2 19:55:54.747364 kernel: CPU features: detected: Spectre-BHB Oct 2 19:55:54.747371 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 2 19:55:54.747380 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 2 19:55:54.747386 kernel: CPU features: detected: ARM erratum 1418040 Oct 2 19:55:54.747392 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Oct 2 19:55:54.747398 kernel: Policy zone: DMA Oct 2 19:55:54.747406 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca Oct 2 19:55:54.747413 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 2 19:55:54.747419 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 2 19:55:54.747425 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 19:55:54.747432 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 19:55:54.747438 kernel: Memory: 2459272K/2572288K available (9792K kernel code, 2092K rwdata, 7548K rodata, 34560K init, 779K bss, 113016K reserved, 0K cma-reserved) Oct 2 19:55:54.747446 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 2 19:55:54.747452 kernel: trace event string verifier disabled Oct 2 19:55:54.747458 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 2 19:55:54.747465 kernel: rcu: RCU event tracing is enabled. Oct 2 19:55:54.747472 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 2 19:55:54.747478 kernel: Trampoline variant of Tasks RCU enabled. Oct 2 19:55:54.747485 kernel: Tracing variant of Tasks RCU enabled. Oct 2 19:55:54.747491 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Oct 2 19:55:54.747498 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 2 19:55:54.747504 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 2 19:55:54.747510 kernel: GICv3: 256 SPIs implemented Oct 2 19:55:54.747518 kernel: GICv3: 0 Extended SPIs implemented Oct 2 19:55:54.747524 kernel: GICv3: Distributor has no Range Selector support Oct 2 19:55:54.747530 kernel: Root IRQ handler: gic_handle_irq Oct 2 19:55:54.747536 kernel: GICv3: 16 PPIs implemented Oct 2 19:55:54.747543 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Oct 2 19:55:54.747549 kernel: ACPI: SRAT not present Oct 2 19:55:54.747555 kernel: ITS [mem 0x08080000-0x0809ffff] Oct 2 19:55:54.747561 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Oct 2 19:55:54.747568 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Oct 2 19:55:54.747574 kernel: GICv3: using LPI property table @0x00000000400d0000 Oct 2 19:55:54.747580 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Oct 2 19:55:54.747587 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 2 19:55:54.747595 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Oct 2 19:55:54.747601 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Oct 2 19:55:54.747608 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Oct 2 19:55:54.747614 kernel: arm-pv: using stolen time PV Oct 2 19:55:54.747620 kernel: Console: colour dummy device 80x25 Oct 2 19:55:54.747627 kernel: ACPI: Core revision 20210730 Oct 2 19:55:54.747634 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Oct 2 19:55:54.747640 kernel: pid_max: default: 32768 minimum: 301 Oct 2 19:55:54.747647 kernel: LSM: Security Framework initializing Oct 2 19:55:54.747653 kernel: SELinux: Initializing. Oct 2 19:55:54.747661 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:55:54.747667 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:55:54.747673 kernel: rcu: Hierarchical SRCU implementation. Oct 2 19:55:54.747680 kernel: Platform MSI: ITS@0x8080000 domain created Oct 2 19:55:54.747686 kernel: PCI/MSI: ITS@0x8080000 domain created Oct 2 19:55:54.747692 kernel: Remapping and enabling EFI services. Oct 2 19:55:54.747699 kernel: smp: Bringing up secondary CPUs ... 
Oct 2 19:55:54.747706 kernel: Detected PIPT I-cache on CPU1 Oct 2 19:55:54.747713 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Oct 2 19:55:54.747721 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Oct 2 19:55:54.747727 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 2 19:55:54.747734 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Oct 2 19:55:54.747740 kernel: Detected PIPT I-cache on CPU2 Oct 2 19:55:54.747747 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Oct 2 19:55:54.747754 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Oct 2 19:55:54.747761 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 2 19:55:54.747767 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Oct 2 19:55:54.747773 kernel: Detected PIPT I-cache on CPU3 Oct 2 19:55:54.747780 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Oct 2 19:55:54.747788 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Oct 2 19:55:54.747794 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 2 19:55:54.747801 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Oct 2 19:55:54.747813 kernel: smp: Brought up 1 node, 4 CPUs Oct 2 19:55:54.747824 kernel: SMP: Total of 4 processors activated. Oct 2 19:55:54.747832 kernel: CPU features: detected: 32-bit EL0 Support Oct 2 19:55:54.747839 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Oct 2 19:55:54.747845 kernel: CPU features: detected: Common not Private translations Oct 2 19:55:54.747852 kernel: CPU features: detected: CRC32 instructions Oct 2 19:55:54.747859 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Oct 2 19:55:54.747866 kernel: CPU features: detected: LSE atomic instructions Oct 2 19:55:54.747873 kernel: CPU features: detected: Privileged Access Never Oct 2 19:55:54.747881 kernel: CPU features: detected: RAS Extension Support Oct 2 19:55:54.747888 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Oct 2 19:55:54.747895 kernel: CPU: All CPU(s) started at EL1 Oct 2 19:55:54.747901 kernel: alternatives: patching kernel code Oct 2 19:55:54.747909 kernel: devtmpfs: initialized Oct 2 19:55:54.747916 kernel: KASLR enabled Oct 2 19:55:54.747923 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 19:55:54.747929 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 2 19:55:54.747936 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 19:55:54.747943 kernel: SMBIOS 3.0.0 present. 
Oct 2 19:55:54.747950 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Oct 2 19:55:54.747956 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 19:55:54.747963 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 2 19:55:54.747970 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 2 19:55:54.747978 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 2 19:55:54.747985 kernel: audit: initializing netlink subsys (disabled) Oct 2 19:55:54.747992 kernel: audit: type=2000 audit(0.034:1): state=initialized audit_enabled=0 res=1 Oct 2 19:55:54.747999 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 19:55:54.748006 kernel: cpuidle: using governor menu Oct 2 19:55:54.748013 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Oct 2 19:55:54.748019 kernel: ASID allocator initialised with 32768 entries Oct 2 19:55:54.748026 kernel: ACPI: bus type PCI registered Oct 2 19:55:54.748033 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 19:55:54.748041 kernel: Serial: AMBA PL011 UART driver Oct 2 19:55:54.748048 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 2 19:55:54.748055 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Oct 2 19:55:54.748061 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 19:55:54.748068 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Oct 2 19:55:54.748075 kernel: cryptd: max_cpu_qlen set to 1000 Oct 2 19:55:54.748082 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 2 19:55:54.748088 kernel: ACPI: Added _OSI(Module Device) Oct 2 19:55:54.748095 kernel: ACPI: Added _OSI(Processor Device) Oct 2 19:55:54.748103 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 19:55:54.748110 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 19:55:54.748117 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 2 19:55:54.748123 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 19:55:54.748130 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 19:55:54.748136 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 2 19:55:54.748143 kernel: ACPI: Interpreter enabled Oct 2 19:55:54.748150 kernel: ACPI: Using GIC for interrupt routing Oct 2 19:55:54.748156 kernel: ACPI: MCFG table detected, 1 entries Oct 2 19:55:54.748165 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Oct 2 19:55:54.748172 kernel: printk: console [ttyAMA0] enabled Oct 2 19:55:54.748179 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 2 19:55:54.748336 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 2 19:55:54.748409 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 2 19:55:54.748472 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 2 19:55:54.748534 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Oct 2 19:55:54.748601 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Oct 2 19:55:54.748610 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Oct 2 19:55:54.748617 kernel: PCI host bridge to bus 0000:00 Oct 2 19:55:54.748687 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Oct 2 19:55:54.748745 kernel: pci_bus 0000:00: root bus resource [io 
0x0000-0xffff window] Oct 2 19:55:54.748801 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Oct 2 19:55:54.748871 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 2 19:55:54.748951 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Oct 2 19:55:54.749026 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Oct 2 19:55:54.749090 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Oct 2 19:55:54.749153 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Oct 2 19:55:54.749216 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Oct 2 19:55:54.749287 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Oct 2 19:55:54.749354 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Oct 2 19:55:54.749421 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Oct 2 19:55:54.749477 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Oct 2 19:55:54.749532 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 2 19:55:54.749589 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Oct 2 19:55:54.749598 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 2 19:55:54.749605 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 2 19:55:54.749611 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 2 19:55:54.749620 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 2 19:55:54.749627 kernel: iommu: Default domain type: Translated Oct 2 19:55:54.749634 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 2 19:55:54.749640 kernel: vgaarb: loaded Oct 2 19:55:54.749647 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 19:55:54.749654 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 2 19:55:54.749661 kernel: PTP clock support registered Oct 2 19:55:54.749667 kernel: Registered efivars operations Oct 2 19:55:54.749674 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 2 19:55:54.749681 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 19:55:54.749689 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 19:55:54.749696 kernel: pnp: PnP ACPI init Oct 2 19:55:54.749763 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Oct 2 19:55:54.749773 kernel: pnp: PnP ACPI: found 1 devices Oct 2 19:55:54.749779 kernel: NET: Registered PF_INET protocol family Oct 2 19:55:54.749787 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 2 19:55:54.749793 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 2 19:55:54.749800 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 19:55:54.749814 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 2 19:55:54.749820 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Oct 2 19:55:54.749827 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 2 19:55:54.749834 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:55:54.749840 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:55:54.749847 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 19:55:54.749853 kernel: PCI: CLS 0 bytes, default 64 Oct 2 19:55:54.749860 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Oct 2 19:55:54.749868 kernel: kvm [1]: HYP mode not available Oct 2 19:55:54.749874 kernel: Initialise system trusted keyrings Oct 2 19:55:54.749881 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 2 19:55:54.749887 kernel: Key type asymmetric registered Oct 2 19:55:54.749893 kernel: Asymmetric key parser 'x509' registered Oct 2 19:55:54.749900 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 19:55:54.749907 kernel: io scheduler mq-deadline registered Oct 2 19:55:54.749913 kernel: io scheduler kyber registered Oct 2 19:55:54.749920 kernel: io scheduler bfq registered Oct 2 19:55:54.749926 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 2 19:55:54.749934 kernel: ACPI: button: Power Button [PWRB] Oct 2 19:55:54.749941 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 2 19:55:54.750007 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Oct 2 19:55:54.750016 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 19:55:54.750023 kernel: thunder_xcv, ver 1.0 Oct 2 19:55:54.750029 kernel: thunder_bgx, ver 1.0 Oct 2 19:55:54.750036 kernel: nicpf, ver 1.0 Oct 2 19:55:54.750042 kernel: nicvf, ver 1.0 Oct 2 19:55:54.750116 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 2 19:55:54.750176 kernel: rtc-efi rtc-efi.0: setting system clock to 2023-10-02T19:55:54 UTC (1696276554) Oct 2 19:55:54.750185 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 2 19:55:54.750191 kernel: NET: Registered PF_INET6 protocol family Oct 2 19:55:54.750198 kernel: Segment Routing with IPv6 Oct 2 19:55:54.750204 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 19:55:54.750211 kernel: NET: Registered PF_PACKET protocol family Oct 2 19:55:54.750218 kernel: Key type dns_resolver registered Oct 2 19:55:54.750224 
kernel: registered taskstats version 1 Oct 2 19:55:54.750232 kernel: Loading compiled-in X.509 certificates Oct 2 19:55:54.750239 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 3a2a38edc68cb70dc60ec0223a6460557b3bb28d' Oct 2 19:55:54.750246 kernel: Key type .fscrypt registered Oct 2 19:55:54.750252 kernel: Key type fscrypt-provisioning registered Oct 2 19:55:54.750259 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 2 19:55:54.750265 kernel: ima: Allocated hash algorithm: sha1 Oct 2 19:55:54.750272 kernel: ima: No architecture policies found Oct 2 19:55:54.750287 kernel: Freeing unused kernel memory: 34560K Oct 2 19:55:54.750294 kernel: Run /init as init process Oct 2 19:55:54.750302 kernel: with arguments: Oct 2 19:55:54.750308 kernel: /init Oct 2 19:55:54.750315 kernel: with environment: Oct 2 19:55:54.750321 kernel: HOME=/ Oct 2 19:55:54.750328 kernel: TERM=linux Oct 2 19:55:54.750335 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 19:55:54.750344 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:55:54.750352 systemd[1]: Detected virtualization kvm. Oct 2 19:55:54.750361 systemd[1]: Detected architecture arm64. Oct 2 19:55:54.750368 systemd[1]: Running in initrd. Oct 2 19:55:54.750374 systemd[1]: No hostname configured, using default hostname. Oct 2 19:55:54.750381 systemd[1]: Hostname set to . Oct 2 19:55:54.750389 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:55:54.750396 systemd[1]: Queued start job for default target initrd.target. Oct 2 19:55:54.750403 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:55:54.750410 systemd[1]: Reached target cryptsetup.target. Oct 2 19:55:54.750418 systemd[1]: Reached target paths.target. Oct 2 19:55:54.750425 systemd[1]: Reached target slices.target. Oct 2 19:55:54.750432 systemd[1]: Reached target swap.target. Oct 2 19:55:54.750439 systemd[1]: Reached target timers.target. Oct 2 19:55:54.750446 systemd[1]: Listening on iscsid.socket. Oct 2 19:55:54.750454 systemd[1]: Listening on iscsiuio.socket. Oct 2 19:55:54.750461 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 19:55:54.750469 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 19:55:54.750476 systemd[1]: Listening on systemd-journald.socket. Oct 2 19:55:54.750483 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:55:54.750490 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:55:54.750497 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:55:54.750504 systemd[1]: Reached target sockets.target. Oct 2 19:55:54.750511 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:55:54.750518 systemd[1]: Finished network-cleanup.service. Oct 2 19:55:54.750525 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 19:55:54.750534 systemd[1]: Starting systemd-journald.service... Oct 2 19:55:54.750541 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:55:54.750548 systemd[1]: Starting systemd-resolved.service... Oct 2 19:55:54.750555 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 19:55:54.750562 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:55:54.750568 systemd[1]: Finished systemd-fsck-usr.service. 
Oct 2 19:55:54.750575 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:55:54.750586 systemd-journald[289]: Journal started Oct 2 19:55:54.750642 systemd-journald[289]: Runtime Journal (/run/log/journal/1d2922080dfc46e79ceabbb5be9b64e6) is 6.0M, max 48.7M, 42.6M free. Oct 2 19:55:54.750674 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:55:54.745322 systemd-modules-load[290]: Inserted module 'overlay' Oct 2 19:55:54.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:54.761763 kernel: audit: type=1130 audit(1696276554.758:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:54.761789 systemd[1]: Started systemd-journald.service. Oct 2 19:55:54.762153 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 19:55:54.768558 kernel: audit: type=1130 audit(1696276554.761:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:54.768578 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 19:55:54.768587 kernel: audit: type=1130 audit(1696276554.765:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:54.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:54.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:54.767980 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 19:55:54.770195 systemd-modules-load[290]: Inserted module 'br_netfilter' Oct 2 19:55:54.771094 kernel: Bridge firewalling registered Oct 2 19:55:54.770272 systemd-resolved[291]: Positive Trust Anchors: Oct 2 19:55:54.770287 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:55:54.770316 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:55:54.775509 systemd-resolved[291]: Defaulting to hostname 'linux'. Oct 2 19:55:54.776435 systemd[1]: Started systemd-resolved.service. Oct 2 19:55:54.783429 kernel: audit: type=1130 audit(1696276554.777:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:55:54.783452 kernel: SCSI subsystem initialized Oct 2 19:55:54.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:54.777902 systemd[1]: Reached target nss-lookup.target. Oct 2 19:55:54.786619 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 19:55:54.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:54.788787 systemd[1]: Starting dracut-cmdline.service... Oct 2 19:55:54.790294 kernel: audit: type=1130 audit(1696276554.787:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:54.792682 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 19:55:54.792710 kernel: device-mapper: uevent: version 1.0.3 Oct 2 19:55:54.793926 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 19:55:54.797344 systemd-modules-load[290]: Inserted module 'dm_multipath' Oct 2 19:55:54.798212 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:55:54.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:54.799749 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:55:54.803105 kernel: audit: type=1130 audit(1696276554.798:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:54.803211 dracut-cmdline[306]: dracut-dracut-053 Oct 2 19:55:54.804981 dracut-cmdline[306]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca Oct 2 19:55:54.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:54.808443 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:55:54.811554 kernel: audit: type=1130 audit(1696276554.808:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:54.875295 kernel: Loading iSCSI transport class v2.0-870. Oct 2 19:55:54.884310 kernel: iscsi: registered transport (tcp) Oct 2 19:55:54.899287 kernel: iscsi: registered transport (qla4xxx) Oct 2 19:55:54.899310 kernel: QLogic iSCSI HBA Driver Oct 2 19:55:54.942921 systemd[1]: Finished dracut-cmdline.service. Oct 2 19:55:54.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:55:54.944642 systemd[1]: Starting dracut-pre-udev.service... Oct 2 19:55:54.946847 kernel: audit: type=1130 audit(1696276554.943:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:54.992297 kernel: raid6: neonx8 gen() 13350 MB/s Oct 2 19:55:55.009297 kernel: raid6: neonx8 xor() 10167 MB/s Oct 2 19:55:55.026293 kernel: raid6: neonx4 gen() 12698 MB/s Oct 2 19:55:55.043288 kernel: raid6: neonx4 xor() 11036 MB/s Oct 2 19:55:55.060292 kernel: raid6: neonx2 gen() 12446 MB/s Oct 2 19:55:55.077289 kernel: raid6: neonx2 xor() 10201 MB/s Oct 2 19:55:55.094296 kernel: raid6: neonx1 gen() 10333 MB/s Oct 2 19:55:55.111300 kernel: raid6: neonx1 xor() 8573 MB/s Oct 2 19:55:55.128298 kernel: raid6: int64x8 gen() 6095 MB/s Oct 2 19:55:55.145295 kernel: raid6: int64x8 xor() 3517 MB/s Oct 2 19:55:55.162292 kernel: raid6: int64x4 gen() 7224 MB/s Oct 2 19:55:55.179299 kernel: raid6: int64x4 xor() 3760 MB/s Oct 2 19:55:55.196295 kernel: raid6: int64x2 gen() 6087 MB/s Oct 2 19:55:55.213298 kernel: raid6: int64x2 xor() 3277 MB/s Oct 2 19:55:55.230297 kernel: raid6: int64x1 gen() 4910 MB/s Oct 2 19:55:55.247592 kernel: raid6: int64x1 xor() 2552 MB/s Oct 2 19:55:55.247603 kernel: raid6: using algorithm neonx8 gen() 13350 MB/s Oct 2 19:55:55.247611 kernel: raid6: .... xor() 10167 MB/s, rmw enabled Oct 2 19:55:55.247619 kernel: raid6: using neon recovery algorithm Oct 2 19:55:55.259447 kernel: xor: measuring software checksum speed Oct 2 19:55:55.259461 kernel: 8regs : 17293 MB/sec Oct 2 19:55:55.260404 kernel: 32regs : 20739 MB/sec Oct 2 19:55:55.261685 kernel: arm64_neon : 27901 MB/sec Oct 2 19:55:55.261696 kernel: xor: using function: arm64_neon (27901 MB/sec) Oct 2 19:55:55.318300 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Oct 2 19:55:55.329900 systemd[1]: Finished dracut-pre-udev.service. Oct 2 19:55:55.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:55.331454 systemd[1]: Starting systemd-udevd.service... Oct 2 19:55:55.333823 kernel: audit: type=1130 audit(1696276555.330:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:55.330000 audit: BPF prog-id=7 op=LOAD Oct 2 19:55:55.330000 audit: BPF prog-id=8 op=LOAD Oct 2 19:55:55.347168 systemd-udevd[492]: Using default interface naming scheme 'v252'. Oct 2 19:55:55.351431 systemd[1]: Started systemd-udevd.service. Oct 2 19:55:55.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:55.353090 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 19:55:55.365760 dracut-pre-trigger[496]: rd.md=0: removing MD RAID activation Oct 2 19:55:55.398418 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 19:55:55.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:55.399793 systemd[1]: Starting systemd-udev-trigger.service... 
Oct 2 19:55:55.434340 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:55:55.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:55.463877 kernel: virtio_blk virtio1: [vda] 9289728 512-byte logical blocks (4.76 GB/4.43 GiB) Oct 2 19:55:55.466300 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:55:55.490307 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (560) Oct 2 19:55:55.491345 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 19:55:55.496707 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 19:55:55.503033 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 2 19:55:55.504350 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 19:55:55.508397 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:55:55.509848 systemd[1]: Starting disk-uuid.service... Oct 2 19:55:55.518298 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:55:56.559297 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:55:56.559376 disk-uuid[568]: The operation has completed successfully. Oct 2 19:55:56.586804 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 19:55:56.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:56.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:56.586900 systemd[1]: Finished disk-uuid.service. Oct 2 19:55:56.588266 systemd[1]: Starting verity-setup.service... Oct 2 19:55:56.604294 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 2 19:55:56.626929 systemd[1]: Found device dev-mapper-usr.device. Oct 2 19:55:56.628251 systemd[1]: Mounting sysusr-usr.mount... Oct 2 19:55:56.629687 systemd[1]: Finished verity-setup.service. Oct 2 19:55:56.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:56.675902 systemd[1]: Mounted sysusr-usr.mount. Oct 2 19:55:56.676861 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 19:55:56.676528 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 2 19:55:56.677242 systemd[1]: Starting ignition-setup.service... Oct 2 19:55:56.678804 systemd[1]: Starting parse-ip-for-networkd.service... Oct 2 19:55:56.687482 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 2 19:55:56.687518 kernel: BTRFS info (device vda6): using free space tree Oct 2 19:55:56.688287 kernel: BTRFS info (device vda6): has skinny extents Oct 2 19:55:56.696892 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 19:55:56.704515 systemd[1]: Finished ignition-setup.service. Oct 2 19:55:56.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:55:56.706127 systemd[1]: Starting ignition-fetch-offline.service... Oct 2 19:55:56.774936 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 19:55:56.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:56.775000 audit: BPF prog-id=9 op=LOAD Oct 2 19:55:56.776775 systemd[1]: Starting systemd-networkd.service... Oct 2 19:55:56.790633 ignition[656]: Ignition 2.14.0 Oct 2 19:55:56.790643 ignition[656]: Stage: fetch-offline Oct 2 19:55:56.790680 ignition[656]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:55:56.790690 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:55:56.790827 ignition[656]: parsed url from cmdline: "" Oct 2 19:55:56.790831 ignition[656]: no config URL provided Oct 2 19:55:56.790835 ignition[656]: reading system config file "/usr/lib/ignition/user.ign" Oct 2 19:55:56.790842 ignition[656]: no config at "/usr/lib/ignition/user.ign" Oct 2 19:55:56.790860 ignition[656]: op(1): [started] loading QEMU firmware config module Oct 2 19:55:56.790865 ignition[656]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 2 19:55:56.804182 systemd-networkd[745]: lo: Link UP Oct 2 19:55:56.804193 systemd-networkd[745]: lo: Gained carrier Oct 2 19:55:56.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:56.804568 systemd-networkd[745]: Enumeration completed Oct 2 19:55:56.804670 systemd[1]: Started systemd-networkd.service. Oct 2 19:55:56.804749 systemd-networkd[745]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:55:56.805464 systemd[1]: Reached target network.target. Oct 2 19:55:56.807269 systemd[1]: Starting iscsiuio.service... Oct 2 19:55:56.807750 systemd-networkd[745]: eth0: Link UP Oct 2 19:55:56.807753 systemd-networkd[745]: eth0: Gained carrier Oct 2 19:55:56.814396 ignition[656]: op(1): [finished] loading QEMU firmware config module Oct 2 19:55:56.816860 systemd[1]: Started iscsiuio.service. Oct 2 19:55:56.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:56.818493 systemd[1]: Starting iscsid.service... Oct 2 19:55:56.822572 iscsid[752]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:55:56.822572 iscsid[752]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Oct 2 19:55:56.822572 iscsid[752]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 19:55:56.822572 iscsid[752]: If using hardware iscsi like qla4xxx this message can be ignored. 
Oct 2 19:55:56.822572 iscsid[752]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:55:56.822572 iscsid[752]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 19:55:56.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:56.826097 systemd[1]: Started iscsid.service. Oct 2 19:55:56.829684 systemd[1]: Starting dracut-initqueue.service... Oct 2 19:55:56.831357 systemd-networkd[745]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 2 19:55:56.841285 systemd[1]: Finished dracut-initqueue.service. Oct 2 19:55:56.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:56.842130 systemd[1]: Reached target remote-fs-pre.target. Oct 2 19:55:56.843356 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:55:56.844607 systemd[1]: Reached target remote-fs.target. Oct 2 19:55:56.846634 systemd[1]: Starting dracut-pre-mount.service... Oct 2 19:55:56.850052 ignition[656]: parsing config with SHA512: d178c1fea6a303fda6d7ecc66b095fda4ba1f29c6c7729e3ec43fe12c83a46fa0284fdd843162af0b12e9c362cefadf3985b20ccb283610c161a40bd2649f26b Oct 2 19:55:56.855477 systemd[1]: Finished dracut-pre-mount.service. Oct 2 19:55:56.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:56.880768 unknown[656]: fetched base config from "system" Oct 2 19:55:56.880782 unknown[656]: fetched user config from "qemu" Oct 2 19:55:56.881261 ignition[656]: fetch-offline: fetch-offline passed Oct 2 19:55:56.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:56.881561 systemd-resolved[291]: Detected conflict on linux IN A 10.0.0.12 Oct 2 19:55:56.881346 ignition[656]: Ignition finished successfully Oct 2 19:55:56.881569 systemd-resolved[291]: Hostname conflict, changing published hostname from 'linux' to 'linux4'. Oct 2 19:55:56.882448 systemd[1]: Finished ignition-fetch-offline.service. Oct 2 19:55:56.883643 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 2 19:55:56.884445 systemd[1]: Starting ignition-kargs.service... Oct 2 19:55:56.894471 ignition[766]: Ignition 2.14.0 Oct 2 19:55:56.894482 ignition[766]: Stage: kargs Oct 2 19:55:56.894576 ignition[766]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:55:56.896588 systemd[1]: Finished ignition-kargs.service. Oct 2 19:55:56.894587 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:55:56.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:56.898363 systemd[1]: Starting ignition-disks.service... 
Oct 2 19:55:56.895354 ignition[766]: kargs: kargs passed Oct 2 19:55:56.895394 ignition[766]: Ignition finished successfully Oct 2 19:55:56.906883 ignition[772]: Ignition 2.14.0 Oct 2 19:55:56.906898 ignition[772]: Stage: disks Oct 2 19:55:56.906995 ignition[772]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:55:56.907006 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:55:56.908608 systemd[1]: Finished ignition-disks.service. Oct 2 19:55:56.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:56.907856 ignition[772]: disks: disks passed Oct 2 19:55:56.910084 systemd[1]: Reached target initrd-root-device.target. Oct 2 19:55:56.907902 ignition[772]: Ignition finished successfully Oct 2 19:55:56.910934 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:55:56.911804 systemd[1]: Reached target local-fs.target. Oct 2 19:55:56.912776 systemd[1]: Reached target sysinit.target. Oct 2 19:55:56.913684 systemd[1]: Reached target basic.target. Oct 2 19:55:56.915697 systemd[1]: Starting systemd-fsck-root.service... Oct 2 19:55:56.929404 systemd-fsck[780]: ROOT: clean, 603/553520 files, 56011/553472 blocks Oct 2 19:55:56.933885 systemd[1]: Finished systemd-fsck-root.service. Oct 2 19:55:56.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:56.935486 systemd[1]: Mounting sysroot.mount... Oct 2 19:55:56.942299 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 19:55:56.942699 systemd[1]: Mounted sysroot.mount. Oct 2 19:55:56.943259 systemd[1]: Reached target initrd-root-fs.target. Oct 2 19:55:56.946105 systemd[1]: Mounting sysroot-usr.mount... Oct 2 19:55:56.946895 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 2 19:55:56.946949 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 19:55:56.946975 systemd[1]: Reached target ignition-diskful.target. Oct 2 19:55:56.949743 systemd[1]: Mounted sysroot-usr.mount. Oct 2 19:55:56.951455 systemd[1]: Starting initrd-setup-root.service... Oct 2 19:55:56.957161 initrd-setup-root[790]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 19:55:56.962066 initrd-setup-root[798]: cut: /sysroot/etc/group: No such file or directory Oct 2 19:55:56.966365 initrd-setup-root[806]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 19:55:56.970769 initrd-setup-root[814]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 19:55:56.999840 systemd[1]: Finished initrd-setup-root.service. Oct 2 19:55:57.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:57.001197 systemd[1]: Starting ignition-mount.service... Oct 2 19:55:57.002381 systemd[1]: Starting sysroot-boot.service... Oct 2 19:55:57.007432 bash[831]: umount: /sysroot/usr/share/oem: not mounted. 
Oct 2 19:55:57.017492 ignition[832]: INFO : Ignition 2.14.0 Oct 2 19:55:57.017492 ignition[832]: INFO : Stage: mount Oct 2 19:55:57.019237 ignition[832]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:55:57.019237 ignition[832]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:55:57.019237 ignition[832]: INFO : mount: mount passed Oct 2 19:55:57.019237 ignition[832]: INFO : Ignition finished successfully Oct 2 19:55:57.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:57.021506 systemd[1]: Finished ignition-mount.service. Oct 2 19:55:57.027723 systemd[1]: Finished sysroot-boot.service. Oct 2 19:55:57.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:57.636645 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:55:57.643487 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (841) Oct 2 19:55:57.643519 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 2 19:55:57.643529 kernel: BTRFS info (device vda6): using free space tree Oct 2 19:55:57.644417 kernel: BTRFS info (device vda6): has skinny extents Oct 2 19:55:57.647149 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:55:57.648768 systemd[1]: Starting ignition-files.service... Oct 2 19:55:57.665528 ignition[861]: INFO : Ignition 2.14.0 Oct 2 19:55:57.665528 ignition[861]: INFO : Stage: files Oct 2 19:55:57.666912 ignition[861]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:55:57.666912 ignition[861]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:55:57.666912 ignition[861]: DEBUG : files: compiled without relabeling support, skipping Oct 2 19:55:57.669709 ignition[861]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 19:55:57.669709 ignition[861]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 19:55:57.671739 ignition[861]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 19:55:57.671739 ignition[861]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 19:55:57.673656 ignition[861]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 19:55:57.673656 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Oct 2 19:55:57.673656 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1 Oct 2 19:55:57.672186 unknown[861]: wrote ssh authorized keys file for user: core Oct 2 19:55:57.837144 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 2 19:55:58.001623 systemd-networkd[745]: eth0: Gained IPv6LL Oct 2 19:55:58.039238 ignition[861]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742 Oct 2 19:55:58.042685 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(3): 
[finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Oct 2 19:55:58.042685 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Oct 2 19:55:58.042685 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1 Oct 2 19:55:58.296926 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 2 19:55:58.423631 ignition[861]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c Oct 2 19:55:58.425715 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Oct 2 19:55:58.425715 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:55:58.425715 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://storage.googleapis.com/kubernetes-release/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1 Oct 2 19:55:58.473508 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Oct 2 19:55:58.752384 ignition[861]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db Oct 2 19:55:58.754796 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:55:58.754796 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:55:58.754796 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://storage.googleapis.com/kubernetes-release/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1 Oct 2 19:55:58.792187 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Oct 2 19:55:59.474963 ignition[861]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d Oct 2 19:55:59.477168 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:55:59.477168 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Oct 2 19:55:59.477168 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 19:55:59.477168 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:55:59.477168 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:55:59.477168 ignition[861]: INFO : files: op(9): [started] processing unit "coreos-metadata.service" Oct 2 19:55:59.477168 ignition[861]: INFO : files: op(9): op(a): [started] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Oct 2 19:55:59.485885 ignition[861]: INFO : files: op(9): op(a): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 2 19:55:59.485885 ignition[861]: INFO : files: op(9): [finished] processing unit "coreos-metadata.service" Oct 2 19:55:59.485885 ignition[861]: INFO : files: op(b): [started] processing unit "prepare-cni-plugins.service" Oct 2 19:55:59.485885 ignition[861]: INFO : files: op(b): op(c): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:55:59.485885 ignition[861]: INFO : files: op(b): op(c): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:55:59.485885 ignition[861]: INFO : files: op(b): [finished] processing unit "prepare-cni-plugins.service" Oct 2 19:55:59.485885 ignition[861]: INFO : files: op(d): [started] processing unit "prepare-critools.service" Oct 2 19:55:59.485885 ignition[861]: INFO : files: op(d): op(e): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:55:59.485885 ignition[861]: INFO : files: op(d): op(e): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:55:59.485885 ignition[861]: INFO : files: op(d): [finished] processing unit "prepare-critools.service" Oct 2 19:55:59.485885 ignition[861]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Oct 2 19:55:59.485885 ignition[861]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 2 19:55:59.525618 ignition[861]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 2 19:55:59.526728 ignition[861]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Oct 2 19:55:59.526728 ignition[861]: INFO : files: op(11): [started] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:55:59.526728 ignition[861]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:55:59.526728 ignition[861]: INFO : files: op(12): [started] setting preset to enabled for "prepare-critools.service" Oct 2 19:55:59.526728 ignition[861]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 19:55:59.526728 ignition[861]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:55:59.526728 ignition[861]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:55:59.526728 ignition[861]: INFO : files: files passed Oct 2 19:55:59.544394 kernel: kauditd_printk_skb: 23 callbacks suppressed Oct 2 19:55:59.544416 kernel: audit: type=1130 audit(1696276559.529:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.544427 kernel: audit: type=1130 audit(1696276559.536:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:55:59.544436 kernel: audit: type=1131 audit(1696276559.537:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.544446 kernel: audit: type=1130 audit(1696276559.541:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.544551 ignition[861]: INFO : Ignition finished successfully Oct 2 19:55:59.527772 systemd[1]: Finished ignition-files.service. Oct 2 19:55:59.530261 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 19:55:59.547644 initrd-setup-root-after-ignition[887]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Oct 2 19:55:59.530890 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 2 19:55:59.550373 initrd-setup-root-after-ignition[889]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 19:55:59.531524 systemd[1]: Starting ignition-quench.service... Oct 2 19:55:59.535395 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 2 19:55:59.535472 systemd[1]: Finished ignition-quench.service. Oct 2 19:55:59.539097 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 19:55:59.542359 systemd[1]: Reached target ignition-complete.target. Oct 2 19:55:59.545585 systemd[1]: Starting initrd-parse-etc.service... Oct 2 19:55:59.560007 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 19:55:59.560100 systemd[1]: Finished initrd-parse-etc.service. Oct 2 19:55:59.565350 kernel: audit: type=1130 audit(1696276559.561:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.565379 kernel: audit: type=1131 audit(1696276559.561:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 2 19:55:59.561562 systemd[1]: Reached target initrd-fs.target. Oct 2 19:55:59.566058 systemd[1]: Reached target initrd.target. Oct 2 19:55:59.567340 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 19:55:59.568128 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 19:55:59.580046 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 19:55:59.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.581463 systemd[1]: Starting initrd-cleanup.service... Oct 2 19:55:59.583646 kernel: audit: type=1130 audit(1696276559.580:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.590258 systemd[1]: Stopped target network.target. Oct 2 19:55:59.590899 systemd[1]: Stopped target nss-lookup.target. Oct 2 19:55:59.591993 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 19:55:59.592958 systemd[1]: Stopped target timers.target. Oct 2 19:55:59.593853 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 19:55:59.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.593968 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 19:55:59.598172 kernel: audit: type=1131 audit(1696276559.594:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.595063 systemd[1]: Stopped target initrd.target. Oct 2 19:55:59.597852 systemd[1]: Stopped target basic.target. Oct 2 19:55:59.598913 systemd[1]: Stopped target ignition-complete.target. Oct 2 19:55:59.599837 systemd[1]: Stopped target ignition-diskful.target. Oct 2 19:55:59.600753 systemd[1]: Stopped target initrd-root-device.target. Oct 2 19:55:59.601780 systemd[1]: Stopped target remote-fs.target. Oct 2 19:55:59.602737 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 19:55:59.603767 systemd[1]: Stopped target sysinit.target. Oct 2 19:55:59.604843 systemd[1]: Stopped target local-fs.target. Oct 2 19:55:59.605778 systemd[1]: Stopped target local-fs-pre.target. Oct 2 19:55:59.606717 systemd[1]: Stopped target swap.target. Oct 2 19:55:59.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.607775 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 19:55:59.612144 kernel: audit: type=1131 audit(1696276559.608:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.607893 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 19:55:59.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.608831 systemd[1]: Stopped target cryptsetup.target. 
Oct 2 19:55:59.616364 kernel: audit: type=1131 audit(1696276559.612:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.611367 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 19:55:59.611468 systemd[1]: Stopped dracut-initqueue.service. Oct 2 19:55:59.613191 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 19:55:59.613313 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 19:55:59.616078 systemd[1]: Stopped target paths.target. Oct 2 19:55:59.616847 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 19:55:59.622322 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 19:55:59.623269 systemd[1]: Stopped target slices.target. Oct 2 19:55:59.624471 systemd[1]: Stopped target sockets.target. Oct 2 19:55:59.625525 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 19:55:59.625601 systemd[1]: Closed iscsid.socket. Oct 2 19:55:59.626361 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 19:55:59.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.626431 systemd[1]: Closed iscsiuio.socket. Oct 2 19:55:59.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.627382 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 19:55:59.627484 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 19:55:59.628635 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 19:55:59.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.628730 systemd[1]: Stopped ignition-files.service. Oct 2 19:55:59.630438 systemd[1]: Stopping ignition-mount.service... Oct 2 19:55:59.631083 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 19:55:59.631203 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 19:55:59.633202 systemd[1]: Stopping sysroot-boot.service... Oct 2 19:55:59.634125 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:55:59.635428 systemd[1]: Stopping systemd-resolved.service... Oct 2 19:55:59.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.636489 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Oct 2 19:55:59.641830 ignition[902]: INFO : Ignition 2.14.0 Oct 2 19:55:59.641830 ignition[902]: INFO : Stage: umount Oct 2 19:55:59.641830 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:55:59.641830 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:55:59.641830 ignition[902]: INFO : umount: umount passed Oct 2 19:55:59.641830 ignition[902]: INFO : Ignition finished successfully Oct 2 19:55:59.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.636692 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 19:55:59.637723 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 19:55:59.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.637994 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 19:55:59.639643 systemd-networkd[745]: eth0: DHCPv6 lease lost Oct 2 19:55:59.645144 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 19:55:59.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.646248 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 19:55:59.646374 systemd[1]: Stopped systemd-resolved.service. Oct 2 19:55:59.647924 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:55:59.648028 systemd[1]: Stopped systemd-networkd.service. Oct 2 19:55:59.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.656000 audit: BPF prog-id=6 op=UNLOAD Oct 2 19:55:59.656000 audit: BPF prog-id=9 op=UNLOAD Oct 2 19:55:59.650575 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 19:55:59.650674 systemd[1]: Stopped ignition-mount.service. Oct 2 19:55:59.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.654095 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 19:55:59.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.654198 systemd[1]: Finished initrd-cleanup.service. Oct 2 19:55:59.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.655927 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 19:55:59.655963 systemd[1]: Closed systemd-networkd.socket. Oct 2 19:55:59.656955 systemd[1]: ignition-disks.service: Deactivated successfully. 
Oct 2 19:55:59.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.657074 systemd[1]: Stopped ignition-disks.service. Oct 2 19:55:59.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.658938 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 19:55:59.659031 systemd[1]: Stopped ignition-kargs.service. Oct 2 19:55:59.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.659951 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 19:55:59.659989 systemd[1]: Stopped ignition-setup.service. Oct 2 19:55:59.662081 systemd[1]: Stopping network-cleanup.service... Oct 2 19:55:59.662843 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 19:55:59.662898 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 19:55:59.664211 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 19:55:59.664271 systemd[1]: Stopped systemd-sysctl.service. Oct 2 19:55:59.666259 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 19:55:59.666324 systemd[1]: Stopped systemd-modules-load.service. Oct 2 19:55:59.667671 systemd[1]: Stopping systemd-udevd.service... Oct 2 19:55:59.672067 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 19:55:59.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.675591 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 19:55:59.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.675690 systemd[1]: Stopped network-cleanup.service. Oct 2 19:55:59.676602 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 19:55:59.676707 systemd[1]: Stopped systemd-udevd.service. Oct 2 19:55:59.677834 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 19:55:59.677868 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 19:55:59.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.678922 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 19:55:59.678948 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 19:55:59.680898 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 19:55:59.680940 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 19:55:59.682175 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Oct 2 19:55:59.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.682212 systemd[1]: Stopped dracut-cmdline.service. Oct 2 19:55:59.683983 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 19:55:59.684031 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 19:55:59.685604 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 19:55:59.686372 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 19:55:59.686421 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 19:55:59.693259 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 19:55:59.693432 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 19:55:59.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.724133 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 19:55:59.724235 systemd[1]: Stopped sysroot-boot.service. Oct 2 19:55:59.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.725563 systemd[1]: Reached target initrd-switch-root.target. Oct 2 19:55:59.726513 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 19:55:59.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:55:59.726566 systemd[1]: Stopped initrd-setup-root.service. Oct 2 19:55:59.728399 systemd[1]: Starting initrd-switch-root.service... Oct 2 19:55:59.736066 systemd[1]: Switching root. Oct 2 19:55:59.761870 iscsid[752]: iscsid shutting down. Oct 2 19:55:59.762390 systemd-journald[289]: Received SIGTERM from PID 1 (systemd). Oct 2 19:55:59.762424 systemd-journald[289]: Journal stopped Oct 2 19:56:01.924101 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 19:56:01.924190 kernel: SELinux: Class anon_inode not defined in policy. 
Oct 2 19:56:01.924202 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 19:56:01.924212 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 19:56:01.924222 kernel: SELinux: policy capability open_perms=1 Oct 2 19:56:01.924235 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 19:56:01.924246 kernel: SELinux: policy capability always_check_network=0 Oct 2 19:56:01.924256 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 19:56:01.924269 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 19:56:01.924293 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 19:56:01.924306 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 19:56:01.924318 systemd[1]: Successfully loaded SELinux policy in 33.666ms. Oct 2 19:56:01.924365 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.526ms. Oct 2 19:56:01.924377 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:56:01.924388 systemd[1]: Detected virtualization kvm. Oct 2 19:56:01.924398 systemd[1]: Detected architecture arm64. Oct 2 19:56:01.924410 systemd[1]: Detected first boot. Oct 2 19:56:01.924420 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:56:01.924431 systemd[1]: Populated /etc with preset unit settings. Oct 2 19:56:01.924442 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:56:01.924455 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:56:01.924468 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:56:01.924480 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 19:56:01.924492 systemd[1]: Stopped iscsiuio.service. Oct 2 19:56:01.924503 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 19:56:01.924513 systemd[1]: Stopped iscsid.service. Oct 2 19:56:01.924524 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 19:56:01.924534 systemd[1]: Stopped initrd-switch-root.service. Oct 2 19:56:01.924555 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 19:56:01.924566 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 19:56:01.924577 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 19:56:01.924588 systemd[1]: Created slice system-getty.slice. Oct 2 19:56:01.924598 systemd[1]: Created slice system-modprobe.slice. Oct 2 19:56:01.924609 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 19:56:01.924619 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 19:56:01.924630 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 19:56:01.924640 systemd[1]: Created slice user.slice. Oct 2 19:56:01.924652 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:56:01.924671 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 19:56:01.924682 systemd[1]: Set up automount boot.automount. 
Oct 2 19:56:01.924693 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 19:56:01.924709 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 19:56:01.924730 systemd[1]: Stopped target initrd-fs.target. Oct 2 19:56:01.924741 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 19:56:01.924753 systemd[1]: Reached target integritysetup.target. Oct 2 19:56:01.924764 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:56:01.924780 systemd[1]: Reached target remote-fs.target. Oct 2 19:56:01.924791 systemd[1]: Reached target slices.target. Oct 2 19:56:01.924802 systemd[1]: Reached target swap.target. Oct 2 19:56:01.924813 systemd[1]: Reached target torcx.target. Oct 2 19:56:01.924824 systemd[1]: Reached target veritysetup.target. Oct 2 19:56:01.924834 systemd[1]: Listening on systemd-coredump.socket. Oct 2 19:56:01.924845 systemd[1]: Listening on systemd-initctl.socket. Oct 2 19:56:01.924855 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:56:01.924867 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:56:01.924878 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:56:01.924889 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 19:56:01.924899 systemd[1]: Mounting dev-hugepages.mount... Oct 2 19:56:01.924909 systemd[1]: Mounting dev-mqueue.mount... Oct 2 19:56:01.924920 systemd[1]: Mounting media.mount... Oct 2 19:56:01.924930 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 19:56:01.924941 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 19:56:01.924952 systemd[1]: Mounting tmp.mount... Oct 2 19:56:01.924964 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 19:56:01.924975 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 19:56:01.924985 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:56:01.924997 systemd[1]: Starting modprobe@configfs.service... Oct 2 19:56:01.925008 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 19:56:01.925018 systemd[1]: Starting modprobe@drm.service... Oct 2 19:56:01.925029 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 19:56:01.925039 systemd[1]: Starting modprobe@fuse.service... Oct 2 19:56:01.925051 systemd[1]: Starting modprobe@loop.service... Oct 2 19:56:01.925065 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 19:56:01.925076 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 19:56:01.925087 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 19:56:01.925097 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 19:56:01.925107 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 19:56:01.925117 systemd[1]: Stopped systemd-journald.service. Oct 2 19:56:01.925128 systemd[1]: Starting systemd-journald.service... Oct 2 19:56:01.925138 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:56:01.925149 systemd[1]: Starting systemd-network-generator.service... Oct 2 19:56:01.925161 systemd[1]: Starting systemd-remount-fs.service... Oct 2 19:56:01.925172 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:56:01.925182 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 19:56:01.925459 systemd[1]: Stopped verity-setup.service. Oct 2 19:56:01.925481 systemd[1]: Mounted dev-hugepages.mount. Oct 2 19:56:01.925492 systemd[1]: Mounted dev-mqueue.mount. Oct 2 19:56:01.925503 kernel: loop: module loaded Oct 2 19:56:01.925514 systemd[1]: Mounted media.mount. 
Oct 2 19:56:01.925525 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 19:56:01.925540 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 19:56:01.925552 systemd[1]: Mounted tmp.mount. Oct 2 19:56:01.925563 kernel: fuse: init (API version 7.34) Oct 2 19:56:01.925573 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:56:01.925584 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 19:56:01.925595 systemd[1]: Finished modprobe@configfs.service. Oct 2 19:56:01.925606 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 19:56:01.925618 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 19:56:01.925629 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 19:56:01.925639 systemd[1]: Finished modprobe@drm.service. Oct 2 19:56:01.925650 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 19:56:01.925661 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 19:56:01.925671 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 19:56:01.925682 systemd[1]: Finished modprobe@fuse.service. Oct 2 19:56:01.925695 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 19:56:01.925706 systemd[1]: Finished modprobe@loop.service. Oct 2 19:56:01.925716 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:56:01.925728 systemd[1]: Finished systemd-network-generator.service. Oct 2 19:56:01.925742 systemd-journald[999]: Journal started Oct 2 19:56:01.925801 systemd-journald[999]: Runtime Journal (/run/log/journal/1d2922080dfc46e79ceabbb5be9b64e6) is 6.0M, max 48.7M, 42.6M free. Oct 2 19:55:59.828000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 19:55:59.995000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:55:59.995000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:55:59.996000 audit: BPF prog-id=10 op=LOAD Oct 2 19:55:59.996000 audit: BPF prog-id=10 op=UNLOAD Oct 2 19:55:59.996000 audit: BPF prog-id=11 op=LOAD Oct 2 19:55:59.996000 audit: BPF prog-id=11 op=UNLOAD Oct 2 19:56:01.763000 audit: BPF prog-id=12 op=LOAD Oct 2 19:56:01.763000 audit: BPF prog-id=3 op=UNLOAD Oct 2 19:56:01.763000 audit: BPF prog-id=13 op=LOAD Oct 2 19:56:01.763000 audit: BPF prog-id=14 op=LOAD Oct 2 19:56:01.763000 audit: BPF prog-id=4 op=UNLOAD Oct 2 19:56:01.763000 audit: BPF prog-id=5 op=UNLOAD Oct 2 19:56:01.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:01.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:01.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:01.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:56:01.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:01.776000 audit: BPF prog-id=12 op=UNLOAD Oct 2 19:56:01.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:01.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:01.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:01.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:01.874000 audit: BPF prog-id=15 op=LOAD Oct 2 19:56:01.874000 audit: BPF prog-id=16 op=LOAD Oct 2 19:56:01.874000 audit: BPF prog-id=17 op=LOAD Oct 2 19:56:01.874000 audit: BPF prog-id=13 op=UNLOAD Oct 2 19:56:01.874000 audit: BPF prog-id=14 op=UNLOAD Oct 2 19:56:01.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:01.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:01.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:01.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:01.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:01.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:01.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:01.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:56:01.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:01.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:01.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:01.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:01.922000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 19:56:01.922000 audit[999]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffd8e62d50 a2=4000 a3=1 items=0 ppid=1 pid=999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:01.922000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 19:56:01.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:01.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:01.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:01.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:01.761798 systemd[1]: Queued start job for default target multi-user.target. Oct 2 19:56:00.045666 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2023-10-02T19:56:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:56:01.761811 systemd[1]: Unnecessary job was removed for dev-vda6.device. Oct 2 19:56:00.046186 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2023-10-02T19:56:00Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:56:01.765247 systemd[1]: systemd-journald.service: Deactivated successfully. 
Oct 2 19:56:00.046206 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2023-10-02T19:56:00Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:56:00.046239 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2023-10-02T19:56:00Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 19:56:00.046248 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2023-10-02T19:56:00Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 19:56:00.046288 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2023-10-02T19:56:00Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 19:56:00.046303 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2023-10-02T19:56:00Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 19:56:00.046506 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2023-10-02T19:56:00Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 19:56:00.046544 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2023-10-02T19:56:00Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:56:00.046557 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2023-10-02T19:56:00Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:56:00.047037 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2023-10-02T19:56:00Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 19:56:01.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:00.047078 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2023-10-02T19:56:00Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 19:56:00.047097 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2023-10-02T19:56:00Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 19:56:00.047111 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2023-10-02T19:56:00Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 19:56:01.928304 systemd[1]: Started systemd-journald.service. 
Oct 2 19:56:00.047128 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2023-10-02T19:56:00Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 19:56:00.047141 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2023-10-02T19:56:00Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 19:56:01.507579 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2023-10-02T19:56:01Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:56:01.507865 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2023-10-02T19:56:01Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:56:01.928591 systemd[1]: Finished systemd-remount-fs.service. Oct 2 19:56:01.507971 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2023-10-02T19:56:01Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:56:01.508131 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2023-10-02T19:56:01Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:56:01.508181 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2023-10-02T19:56:01Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 19:56:01.508239 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2023-10-02T19:56:01Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 19:56:01.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:01.930003 systemd[1]: Reached target network-pre.target. Oct 2 19:56:01.932079 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 19:56:01.934097 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 19:56:01.934849 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 19:56:01.938437 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 19:56:01.940265 systemd[1]: Starting systemd-journal-flush.service... Oct 2 19:56:01.940945 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 19:56:01.942618 systemd[1]: Starting systemd-random-seed.service... 
Oct 2 19:56:01.948065 systemd-journald[999]: Time spent on flushing to /var/log/journal/1d2922080dfc46e79ceabbb5be9b64e6 is 14.919ms for 972 entries. Oct 2 19:56:01.948065 systemd-journald[999]: System Journal (/var/log/journal/1d2922080dfc46e79ceabbb5be9b64e6) is 8.0M, max 195.6M, 187.6M free. Oct 2 19:56:01.987053 systemd-journald[999]: Received client request to flush runtime journal. Oct 2 19:56:01.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:01.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:01.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:01.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:01.943332 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 19:56:01.944722 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:56:01.949411 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 19:56:01.950856 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 19:56:01.988622 udevadm[1039]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 2 19:56:01.953682 systemd[1]: Finished systemd-random-seed.service. Oct 2 19:56:01.954539 systemd[1]: Reached target first-boot-complete.target. Oct 2 19:56:01.957389 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:56:01.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:01.958652 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 19:56:01.961034 systemd[1]: Starting systemd-sysusers.service... Oct 2 19:56:01.962933 systemd[1]: Starting systemd-udev-settle.service... Oct 2 19:56:01.964048 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:56:01.988265 systemd[1]: Finished systemd-journal-flush.service. Oct 2 19:56:01.994557 systemd[1]: Finished systemd-sysusers.service. Oct 2 19:56:01.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:02.341424 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 19:56:02.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:56:02.343000 audit: BPF prog-id=18 op=LOAD Oct 2 19:56:02.343000 audit: BPF prog-id=19 op=LOAD Oct 2 19:56:02.343000 audit: BPF prog-id=7 op=UNLOAD Oct 2 19:56:02.343000 audit: BPF prog-id=8 op=UNLOAD Oct 2 19:56:02.344141 systemd[1]: Starting systemd-udevd.service... Oct 2 19:56:02.364970 systemd-udevd[1041]: Using default interface naming scheme 'v252'. Oct 2 19:56:02.393535 systemd[1]: Started systemd-udevd.service. Oct 2 19:56:02.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:02.394000 audit: BPF prog-id=20 op=LOAD Oct 2 19:56:02.395936 systemd[1]: Starting systemd-networkd.service... Oct 2 19:56:02.422882 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Oct 2 19:56:02.428000 audit: BPF prog-id=21 op=LOAD Oct 2 19:56:02.428000 audit: BPF prog-id=22 op=LOAD Oct 2 19:56:02.428000 audit: BPF prog-id=23 op=LOAD Oct 2 19:56:02.429816 systemd[1]: Starting systemd-userdbd.service... Oct 2 19:56:02.470925 systemd[1]: Started systemd-userdbd.service. Oct 2 19:56:02.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:02.498459 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:56:02.513821 systemd[1]: Finished systemd-udev-settle.service. Oct 2 19:56:02.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:02.516249 systemd[1]: Starting lvm2-activation-early.service... Oct 2 19:56:02.553925 systemd-networkd[1048]: lo: Link UP Oct 2 19:56:02.553935 systemd-networkd[1048]: lo: Gained carrier Oct 2 19:56:02.554300 systemd-networkd[1048]: Enumeration completed Oct 2 19:56:02.554416 systemd[1]: Started systemd-networkd.service. Oct 2 19:56:02.554419 systemd-networkd[1048]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:56:02.555102 lvm[1074]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:56:02.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:02.556165 systemd-networkd[1048]: eth0: Link UP Oct 2 19:56:02.556174 systemd-networkd[1048]: eth0: Gained carrier Oct 2 19:56:02.576436 systemd-networkd[1048]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 2 19:56:02.589223 systemd[1]: Finished lvm2-activation-early.service. Oct 2 19:56:02.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:02.590090 systemd[1]: Reached target cryptsetup.target. Oct 2 19:56:02.592113 systemd[1]: Starting lvm2-activation.service... Oct 2 19:56:02.596459 lvm[1075]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:56:02.631270 systemd[1]: Finished lvm2-activation.service. 
Oct 2 19:56:02.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:02.632264 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:56:02.633187 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 19:56:02.633224 systemd[1]: Reached target local-fs.target. Oct 2 19:56:02.633833 systemd[1]: Reached target machines.target. Oct 2 19:56:02.635596 systemd[1]: Starting ldconfig.service... Oct 2 19:56:02.636537 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 19:56:02.636673 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:56:02.637971 systemd[1]: Starting systemd-boot-update.service... Oct 2 19:56:02.640216 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 19:56:02.642411 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 19:56:02.643439 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:56:02.643506 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:56:02.644603 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 2 19:56:02.645488 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1077 (bootctl) Oct 2 19:56:02.649976 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 19:56:02.653858 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 19:56:02.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:02.665912 systemd-tmpfiles[1080]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 19:56:02.667433 systemd-tmpfiles[1080]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 19:56:02.676237 systemd-tmpfiles[1080]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 19:56:02.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:02.791137 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 19:56:02.808298 systemd-fsck[1085]: fsck.fat 4.2 (2021-01-31) Oct 2 19:56:02.808298 systemd-fsck[1085]: /dev/vda1: 236 files, 113463/258078 clusters Oct 2 19:56:02.810798 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 19:56:02.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:02.893143 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 19:56:02.894756 systemd[1]: Mounting boot.mount... 
Oct 2 19:56:02.903761 ldconfig[1076]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 19:56:02.904641 systemd[1]: Mounted boot.mount. Oct 2 19:56:02.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:02.908309 systemd[1]: Finished ldconfig.service. Oct 2 19:56:02.912211 systemd[1]: Finished systemd-boot-update.service. Oct 2 19:56:02.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:02.970355 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 19:56:02.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:02.972341 systemd[1]: Starting audit-rules.service... Oct 2 19:56:02.973975 systemd[1]: Starting clean-ca-certificates.service... Oct 2 19:56:02.975800 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 19:56:02.976000 audit: BPF prog-id=24 op=LOAD Oct 2 19:56:02.983096 systemd[1]: Starting systemd-resolved.service... Oct 2 19:56:02.984000 audit: BPF prog-id=25 op=LOAD Oct 2 19:56:02.986107 systemd[1]: Starting systemd-timesyncd.service... Oct 2 19:56:02.988560 systemd[1]: Starting systemd-update-utmp.service... Oct 2 19:56:02.990659 systemd[1]: Finished clean-ca-certificates.service. Oct 2 19:56:02.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:02.991727 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 19:56:03.000000 audit[1099]: SYSTEM_BOOT pid=1099 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:56:03.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:03.002910 systemd[1]: Finished systemd-update-utmp.service. Oct 2 19:56:03.004107 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 19:56:03.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:03.006457 systemd[1]: Starting systemd-update-done.service... Oct 2 19:56:03.014071 systemd[1]: Finished systemd-update-done.service. Oct 2 19:56:03.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:56:03.038126 augenrules[1109]: No rules Oct 2 19:56:03.037000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 19:56:03.037000 audit[1109]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd58f2d40 a2=420 a3=0 items=0 ppid=1088 pid=1109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:03.037000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 19:56:03.039317 systemd[1]: Finished audit-rules.service. Oct 2 19:56:03.045537 systemd[1]: Started systemd-timesyncd.service. Oct 2 19:56:03.046346 systemd-timesyncd[1093]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 2 19:56:03.046400 systemd-timesyncd[1093]: Initial clock synchronization to Mon 2023-10-02 19:56:03.005208 UTC. Oct 2 19:56:03.046711 systemd[1]: Reached target time-set.target. Oct 2 19:56:03.046847 systemd-resolved[1092]: Positive Trust Anchors: Oct 2 19:56:03.046854 systemd-resolved[1092]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:56:03.046881 systemd-resolved[1092]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:56:03.059442 systemd-resolved[1092]: Defaulting to hostname 'linux'. Oct 2 19:56:03.061136 systemd[1]: Started systemd-resolved.service. Oct 2 19:56:03.061864 systemd[1]: Reached target network.target. Oct 2 19:56:03.062410 systemd[1]: Reached target nss-lookup.target. Oct 2 19:56:03.062956 systemd[1]: Reached target sysinit.target. Oct 2 19:56:03.063558 systemd[1]: Started motdgen.path. Oct 2 19:56:03.064062 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 19:56:03.065076 systemd[1]: Started logrotate.timer. Oct 2 19:56:03.065715 systemd[1]: Started mdadm.timer. Oct 2 19:56:03.066188 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 19:56:03.066806 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 19:56:03.066833 systemd[1]: Reached target paths.target. Oct 2 19:56:03.067344 systemd[1]: Reached target timers.target. Oct 2 19:56:03.068217 systemd[1]: Listening on dbus.socket. Oct 2 19:56:03.070114 systemd[1]: Starting docker.socket... Oct 2 19:56:03.073455 systemd[1]: Listening on sshd.socket. Oct 2 19:56:03.074094 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:56:03.074608 systemd[1]: Listening on docker.socket. Oct 2 19:56:03.075247 systemd[1]: Reached target sockets.target. Oct 2 19:56:03.075870 systemd[1]: Reached target basic.target. Oct 2 19:56:03.076427 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. 
Oct 2 19:56:03.076455 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:56:03.077428 systemd[1]: Starting containerd.service... Oct 2 19:56:03.079000 systemd[1]: Starting dbus.service... Oct 2 19:56:03.080514 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 19:56:03.082077 systemd[1]: Starting extend-filesystems.service... Oct 2 19:56:03.082819 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 19:56:03.083894 systemd[1]: Starting motdgen.service... Oct 2 19:56:03.086467 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 19:56:03.089433 systemd[1]: Starting prepare-critools.service... Oct 2 19:56:03.089564 jq[1119]: false Oct 2 19:56:03.091120 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 19:56:03.092886 systemd[1]: Starting sshd-keygen.service... Oct 2 19:56:03.095582 systemd[1]: Starting systemd-logind.service... Oct 2 19:56:03.096564 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:56:03.096617 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 2 19:56:03.097145 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 19:56:03.097990 systemd[1]: Starting update-engine.service... Oct 2 19:56:03.099460 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 19:56:03.102090 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 19:56:03.103541 jq[1135]: true Oct 2 19:56:03.102244 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 19:56:03.104940 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 19:56:03.105096 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 2 19:56:03.116942 jq[1140]: true Oct 2 19:56:03.122067 extend-filesystems[1120]: Found vda Oct 2 19:56:03.123082 extend-filesystems[1120]: Found vda1 Oct 2 19:56:03.123082 extend-filesystems[1120]: Found vda2 Oct 2 19:56:03.123082 extend-filesystems[1120]: Found vda3 Oct 2 19:56:03.123082 extend-filesystems[1120]: Found usr Oct 2 19:56:03.123082 extend-filesystems[1120]: Found vda4 Oct 2 19:56:03.123082 extend-filesystems[1120]: Found vda6 Oct 2 19:56:03.123082 extend-filesystems[1120]: Found vda7 Oct 2 19:56:03.123082 extend-filesystems[1120]: Found vda9 Oct 2 19:56:03.123082 extend-filesystems[1120]: Checking size of /dev/vda9 Oct 2 19:56:03.137465 tar[1137]: ./ Oct 2 19:56:03.137465 tar[1137]: ./macvlan Oct 2 19:56:03.132674 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 19:56:03.137760 tar[1138]: crictl Oct 2 19:56:03.132842 systemd[1]: Finished motdgen.service. Oct 2 19:56:03.144626 dbus-daemon[1118]: [system] SELinux support is enabled Oct 2 19:56:03.146362 systemd[1]: Started dbus.service. Oct 2 19:56:03.148562 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 19:56:03.148591 systemd[1]: Reached target system-config.target. 
Oct 2 19:56:03.149223 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 19:56:03.149253 systemd[1]: Reached target user-config.target. Oct 2 19:56:03.171354 extend-filesystems[1120]: Old size kept for /dev/vda9 Oct 2 19:56:03.173059 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 19:56:03.173228 systemd[1]: Finished extend-filesystems.service. Oct 2 19:56:03.177163 bash[1169]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:56:03.177983 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 19:56:03.179851 systemd-logind[1130]: Watching system buttons on /dev/input/event0 (Power Button) Oct 2 19:56:03.185464 systemd-logind[1130]: New seat seat0. Oct 2 19:56:03.192089 systemd[1]: Started systemd-logind.service. Oct 2 19:56:03.206215 tar[1137]: ./static Oct 2 19:56:03.210954 update_engine[1134]: I1002 19:56:03.210664 1134 main.cc:92] Flatcar Update Engine starting Oct 2 19:56:03.213019 systemd[1]: Started update-engine.service. Oct 2 19:56:03.213120 update_engine[1134]: I1002 19:56:03.213021 1134 update_check_scheduler.cc:74] Next update check in 8m22s Oct 2 19:56:03.215471 systemd[1]: Started locksmithd.service. Oct 2 19:56:03.235310 tar[1137]: ./vlan Oct 2 19:56:03.247944 env[1141]: time="2023-10-02T19:56:03.247889640Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 19:56:03.279714 tar[1137]: ./portmap Oct 2 19:56:03.283959 env[1141]: time="2023-10-02T19:56:03.282682760Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 19:56:03.283959 env[1141]: time="2023-10-02T19:56:03.282858400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:56:03.286603 env[1141]: time="2023-10-02T19:56:03.286552200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:56:03.286603 env[1141]: time="2023-10-02T19:56:03.286593760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:56:03.291988 env[1141]: time="2023-10-02T19:56:03.286826280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:56:03.291988 env[1141]: time="2023-10-02T19:56:03.286859600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 2 19:56:03.291988 env[1141]: time="2023-10-02T19:56:03.286874600Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 19:56:03.291988 env[1141]: time="2023-10-02T19:56:03.286884160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 19:56:03.291988 env[1141]: time="2023-10-02T19:56:03.286957120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Oct 2 19:56:03.291988 env[1141]: time="2023-10-02T19:56:03.287247600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:56:03.291988 env[1141]: time="2023-10-02T19:56:03.287388000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:56:03.291988 env[1141]: time="2023-10-02T19:56:03.287404760Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 19:56:03.291988 env[1141]: time="2023-10-02T19:56:03.287456480Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 19:56:03.291988 env[1141]: time="2023-10-02T19:56:03.287468400Z" level=info msg="metadata content store policy set" policy=shared Oct 2 19:56:03.292712 env[1141]: time="2023-10-02T19:56:03.292670400Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 19:56:03.292712 env[1141]: time="2023-10-02T19:56:03.292709440Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 19:56:03.292784 env[1141]: time="2023-10-02T19:56:03.292723240Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 19:56:03.292784 env[1141]: time="2023-10-02T19:56:03.292754800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 19:56:03.292784 env[1141]: time="2023-10-02T19:56:03.292777600Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 19:56:03.292852 env[1141]: time="2023-10-02T19:56:03.292794360Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 19:56:03.292852 env[1141]: time="2023-10-02T19:56:03.292807480Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 19:56:03.293190 env[1141]: time="2023-10-02T19:56:03.293156360Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 19:56:03.293190 env[1141]: time="2023-10-02T19:56:03.293185960Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 19:56:03.293252 env[1141]: time="2023-10-02T19:56:03.293201520Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 19:56:03.293252 env[1141]: time="2023-10-02T19:56:03.293214240Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 2 19:56:03.293252 env[1141]: time="2023-10-02T19:56:03.293226480Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 19:56:03.293398 env[1141]: time="2023-10-02T19:56:03.293376200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 19:56:03.293477 env[1141]: time="2023-10-02T19:56:03.293458480Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Oct 2 19:56:03.293687 env[1141]: time="2023-10-02T19:56:03.293668840Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 19:56:03.293725 env[1141]: time="2023-10-02T19:56:03.293696800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 19:56:03.293725 env[1141]: time="2023-10-02T19:56:03.293710320Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 19:56:03.293850 env[1141]: time="2023-10-02T19:56:03.293833880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 19:56:03.293880 env[1141]: time="2023-10-02T19:56:03.293851280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 19:56:03.293880 env[1141]: time="2023-10-02T19:56:03.293864840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 19:56:03.293880 env[1141]: time="2023-10-02T19:56:03.293876120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 19:56:03.293937 env[1141]: time="2023-10-02T19:56:03.293888600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 19:56:03.293937 env[1141]: time="2023-10-02T19:56:03.293900480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 19:56:03.293937 env[1141]: time="2023-10-02T19:56:03.293911400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 19:56:03.293937 env[1141]: time="2023-10-02T19:56:03.293922720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 19:56:03.293937 env[1141]: time="2023-10-02T19:56:03.293935000Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 19:56:03.294076 env[1141]: time="2023-10-02T19:56:03.294052400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 19:56:03.294076 env[1141]: time="2023-10-02T19:56:03.294066960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 19:56:03.294115 env[1141]: time="2023-10-02T19:56:03.294079160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 19:56:03.294115 env[1141]: time="2023-10-02T19:56:03.294091120Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 19:56:03.294115 env[1141]: time="2023-10-02T19:56:03.294104560Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 19:56:03.294115 env[1141]: time="2023-10-02T19:56:03.294114480Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 19:56:03.294189 env[1141]: time="2023-10-02T19:56:03.294131760Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 19:56:03.294189 env[1141]: time="2023-10-02T19:56:03.294167520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 2 19:56:03.294433 env[1141]: time="2023-10-02T19:56:03.294380160Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 19:56:03.295131 env[1141]: time="2023-10-02T19:56:03.294440040Z" level=info msg="Connect containerd service" Oct 2 19:56:03.295131 env[1141]: time="2023-10-02T19:56:03.294475040Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 19:56:03.295131 env[1141]: time="2023-10-02T19:56:03.295077440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 19:56:03.295452 env[1141]: time="2023-10-02T19:56:03.295429680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 2 19:56:03.295491 env[1141]: time="2023-10-02T19:56:03.295476720Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 2 19:56:03.296471 env[1141]: time="2023-10-02T19:56:03.295525080Z" level=info msg="containerd successfully booted in 0.048683s" Oct 2 19:56:03.295607 systemd[1]: Started containerd.service. 
Oct 2 19:56:03.305496 env[1141]: time="2023-10-02T19:56:03.305440560Z" level=info msg="Start subscribing containerd event" Oct 2 19:56:03.305571 env[1141]: time="2023-10-02T19:56:03.305511520Z" level=info msg="Start recovering state" Oct 2 19:56:03.305615 env[1141]: time="2023-10-02T19:56:03.305601280Z" level=info msg="Start event monitor" Oct 2 19:56:03.305668 env[1141]: time="2023-10-02T19:56:03.305650480Z" level=info msg="Start snapshots syncer" Oct 2 19:56:03.305696 env[1141]: time="2023-10-02T19:56:03.305669680Z" level=info msg="Start cni network conf syncer for default" Oct 2 19:56:03.305696 env[1141]: time="2023-10-02T19:56:03.305678200Z" level=info msg="Start streaming server" Oct 2 19:56:03.320993 tar[1137]: ./host-local Oct 2 19:56:03.347774 tar[1137]: ./vrf Oct 2 19:56:03.371636 locksmithd[1172]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 19:56:03.377423 tar[1137]: ./bridge Oct 2 19:56:03.411822 tar[1137]: ./tuning Oct 2 19:56:03.439943 tar[1137]: ./firewall Oct 2 19:56:03.475332 tar[1137]: ./host-device Oct 2 19:56:03.506560 tar[1137]: ./sbr Oct 2 19:56:03.527829 systemd[1]: Finished prepare-critools.service. Oct 2 19:56:03.535125 tar[1137]: ./loopback Oct 2 19:56:03.558124 tar[1137]: ./dhcp Oct 2 19:56:03.622905 tar[1137]: ./ptp Oct 2 19:56:03.651257 tar[1137]: ./ipvlan Oct 2 19:56:03.678630 tar[1137]: ./bandwidth Oct 2 19:56:03.717609 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 19:56:03.953488 systemd-networkd[1048]: eth0: Gained IPv6LL Oct 2 19:56:04.265741 sshd_keygen[1139]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 19:56:04.287004 systemd[1]: Finished sshd-keygen.service. Oct 2 19:56:04.289197 systemd[1]: Starting issuegen.service... Oct 2 19:56:04.294776 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 19:56:04.294941 systemd[1]: Finished issuegen.service. Oct 2 19:56:04.297028 systemd[1]: Starting systemd-user-sessions.service... Oct 2 19:56:04.305625 systemd[1]: Finished systemd-user-sessions.service. Oct 2 19:56:04.307781 systemd[1]: Started getty@tty1.service. Oct 2 19:56:04.309583 systemd[1]: Started serial-getty@ttyAMA0.service. Oct 2 19:56:04.310393 systemd[1]: Reached target getty.target. Oct 2 19:56:04.311163 systemd[1]: Reached target multi-user.target. Oct 2 19:56:04.313166 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 19:56:04.325267 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 19:56:04.325442 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 19:56:04.326243 systemd[1]: Startup finished in 655ms (kernel) + 5.198s (initrd) + 4.533s (userspace) = 10.387s. Oct 2 19:56:06.191459 systemd[1]: Created slice system-sshd.slice. Oct 2 19:56:06.192641 systemd[1]: Started sshd@0-10.0.0.12:22-10.0.0.1:43596.service. Oct 2 19:56:06.258379 sshd[1202]: Accepted publickey for core from 10.0.0.1 port 43596 ssh2: RSA SHA256:HYZQRhxVAAt6Gcr+zBcKZEn/OixtikngqD7jOIOR0c8 Oct 2 19:56:06.261309 sshd[1202]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:56:06.270849 systemd[1]: Created slice user-500.slice. Oct 2 19:56:06.272026 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 19:56:06.273583 systemd-logind[1130]: New session 1 of user core. Oct 2 19:56:06.282965 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 19:56:06.284432 systemd[1]: Starting user@500.service... 
Oct 2 19:56:06.288779 (systemd)[1205]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:56:06.359688 systemd[1205]: Queued start job for default target default.target. Oct 2 19:56:06.360691 systemd[1205]: Reached target paths.target. Oct 2 19:56:06.360711 systemd[1205]: Reached target sockets.target. Oct 2 19:56:06.360723 systemd[1205]: Reached target timers.target. Oct 2 19:56:06.360733 systemd[1205]: Reached target basic.target. Oct 2 19:56:06.360784 systemd[1205]: Reached target default.target. Oct 2 19:56:06.360809 systemd[1205]: Startup finished in 65ms. Oct 2 19:56:06.360859 systemd[1]: Started user@500.service. Oct 2 19:56:06.361776 systemd[1]: Started session-1.scope. Oct 2 19:56:06.413028 systemd[1]: Started sshd@1-10.0.0.12:22-10.0.0.1:43598.service. Oct 2 19:56:06.456988 sshd[1214]: Accepted publickey for core from 10.0.0.1 port 43598 ssh2: RSA SHA256:HYZQRhxVAAt6Gcr+zBcKZEn/OixtikngqD7jOIOR0c8 Oct 2 19:56:06.458580 sshd[1214]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:56:06.462172 systemd-logind[1130]: New session 2 of user core. Oct 2 19:56:06.463071 systemd[1]: Started session-2.scope. Oct 2 19:56:06.525908 sshd[1214]: pam_unix(sshd:session): session closed for user core Oct 2 19:56:06.530242 systemd[1]: Started sshd@2-10.0.0.12:22-10.0.0.1:43606.service. Oct 2 19:56:06.531058 systemd[1]: sshd@1-10.0.0.12:22-10.0.0.1:43598.service: Deactivated successfully. Oct 2 19:56:06.531937 systemd[1]: session-2.scope: Deactivated successfully. Oct 2 19:56:06.532543 systemd-logind[1130]: Session 2 logged out. Waiting for processes to exit. Oct 2 19:56:06.533560 systemd-logind[1130]: Removed session 2. Oct 2 19:56:06.568762 sshd[1219]: Accepted publickey for core from 10.0.0.1 port 43606 ssh2: RSA SHA256:HYZQRhxVAAt6Gcr+zBcKZEn/OixtikngqD7jOIOR0c8 Oct 2 19:56:06.570563 sshd[1219]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:56:06.574512 systemd-logind[1130]: New session 3 of user core. Oct 2 19:56:06.575407 systemd[1]: Started session-3.scope. Oct 2 19:56:06.626756 sshd[1219]: pam_unix(sshd:session): session closed for user core Oct 2 19:56:06.630996 systemd[1]: sshd@2-10.0.0.12:22-10.0.0.1:43606.service: Deactivated successfully. Oct 2 19:56:06.631615 systemd[1]: session-3.scope: Deactivated successfully. Oct 2 19:56:06.633242 systemd-logind[1130]: Session 3 logged out. Waiting for processes to exit. Oct 2 19:56:06.633249 systemd[1]: Started sshd@3-10.0.0.12:22-10.0.0.1:43610.service. Oct 2 19:56:06.634059 systemd-logind[1130]: Removed session 3. Oct 2 19:56:06.670302 sshd[1226]: Accepted publickey for core from 10.0.0.1 port 43610 ssh2: RSA SHA256:HYZQRhxVAAt6Gcr+zBcKZEn/OixtikngqD7jOIOR0c8 Oct 2 19:56:06.672229 sshd[1226]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:56:06.676057 systemd-logind[1130]: New session 4 of user core. Oct 2 19:56:06.676931 systemd[1]: Started session-4.scope. Oct 2 19:56:06.733015 sshd[1226]: pam_unix(sshd:session): session closed for user core Oct 2 19:56:06.737158 systemd[1]: Started sshd@4-10.0.0.12:22-10.0.0.1:43624.service. Oct 2 19:56:06.737635 systemd[1]: sshd@3-10.0.0.12:22-10.0.0.1:43610.service: Deactivated successfully. Oct 2 19:56:06.738415 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 19:56:06.738983 systemd-logind[1130]: Session 4 logged out. Waiting for processes to exit. Oct 2 19:56:06.739936 systemd-logind[1130]: Removed session 4. 
Oct 2 19:56:06.774086 sshd[1231]: Accepted publickey for core from 10.0.0.1 port 43624 ssh2: RSA SHA256:HYZQRhxVAAt6Gcr+zBcKZEn/OixtikngqD7jOIOR0c8 Oct 2 19:56:06.775604 sshd[1231]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:56:06.779342 systemd-logind[1130]: New session 5 of user core. Oct 2 19:56:06.780006 systemd[1]: Started session-5.scope. Oct 2 19:56:06.848191 sudo[1236]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 19:56:06.848418 sudo[1236]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:56:06.865573 dbus-daemon[1118]: avc: received setenforce notice (enforcing=1) Oct 2 19:56:06.867579 sudo[1236]: pam_unix(sudo:session): session closed for user root Oct 2 19:56:06.870074 sshd[1231]: pam_unix(sshd:session): session closed for user core Oct 2 19:56:06.874118 systemd[1]: sshd@4-10.0.0.12:22-10.0.0.1:43624.service: Deactivated successfully. Oct 2 19:56:06.874811 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 19:56:06.875460 systemd-logind[1130]: Session 5 logged out. Waiting for processes to exit. Oct 2 19:56:06.876859 systemd[1]: Started sshd@5-10.0.0.12:22-10.0.0.1:43636.service. Oct 2 19:56:06.877878 systemd-logind[1130]: Removed session 5. Oct 2 19:56:06.913411 sshd[1240]: Accepted publickey for core from 10.0.0.1 port 43636 ssh2: RSA SHA256:HYZQRhxVAAt6Gcr+zBcKZEn/OixtikngqD7jOIOR0c8 Oct 2 19:56:06.915040 sshd[1240]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:56:06.918658 systemd-logind[1130]: New session 6 of user core. Oct 2 19:56:06.919463 systemd[1]: Started session-6.scope. Oct 2 19:56:06.974095 sudo[1244]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 19:56:06.974322 sudo[1244]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:56:06.977225 sudo[1244]: pam_unix(sudo:session): session closed for user root Oct 2 19:56:06.982833 sudo[1243]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 19:56:06.983068 sudo[1243]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:56:06.993056 systemd[1]: Stopping audit-rules.service... Oct 2 19:56:06.993000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:56:06.994953 auditctl[1247]: No rules Oct 2 19:56:06.995717 kernel: kauditd_printk_skb: 115 callbacks suppressed Oct 2 19:56:06.995859 kernel: audit: type=1305 audit(1696276566.993:155): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:56:06.995144 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 19:56:06.995310 systemd[1]: Stopped audit-rules.service. Oct 2 19:56:06.993000 audit[1247]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe7c2f850 a2=420 a3=0 items=0 ppid=1 pid=1247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:06.997466 systemd[1]: Starting audit-rules.service... 
Oct 2 19:56:06.999038 kernel: audit: type=1300 audit(1696276566.993:155): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe7c2f850 a2=420 a3=0 items=0 ppid=1 pid=1247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:06.999099 kernel: audit: type=1327 audit(1696276566.993:155): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:56:06.993000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:56:06.999853 kernel: audit: type=1131 audit(1696276566.994:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:06.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:07.020418 augenrules[1264]: No rules Oct 2 19:56:07.021472 systemd[1]: Finished audit-rules.service. Oct 2 19:56:07.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:07.022700 sudo[1243]: pam_unix(sudo:session): session closed for user root Oct 2 19:56:07.021000 audit[1243]: USER_END pid=1243 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:07.024776 sshd[1240]: pam_unix(sshd:session): session closed for user core Oct 2 19:56:07.026255 kernel: audit: type=1130 audit(1696276567.020:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:07.026344 kernel: audit: type=1106 audit(1696276567.021:158): pid=1243 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:07.026367 kernel: audit: type=1104 audit(1696276567.021:159): pid=1243 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:07.021000 audit[1243]: CRED_DISP pid=1243 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:07.024000 audit[1240]: USER_END pid=1240 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:56:07.028448 systemd[1]: Started sshd@6-10.0.0.12:22-10.0.0.1:43644.service. Oct 2 19:56:07.028963 systemd[1]: sshd@5-10.0.0.12:22-10.0.0.1:43636.service: Deactivated successfully. Oct 2 19:56:07.029566 systemd[1]: session-6.scope: Deactivated successfully. 
Oct 2 19:56:07.030238 systemd-logind[1130]: Session 6 logged out. Waiting for processes to exit. Oct 2 19:56:07.030817 kernel: audit: type=1106 audit(1696276567.024:160): pid=1240 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:56:07.030860 kernel: audit: type=1104 audit(1696276567.024:161): pid=1240 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:56:07.024000 audit[1240]: CRED_DISP pid=1240 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:56:07.031406 systemd-logind[1130]: Removed session 6. Oct 2 19:56:07.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.12:22-10.0.0.1:43644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:07.034716 kernel: audit: type=1130 audit(1696276567.027:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.12:22-10.0.0.1:43644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:07.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.12:22-10.0.0.1:43636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:07.064000 audit[1269]: USER_ACCT pid=1269 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:56:07.065681 sshd[1269]: Accepted publickey for core from 10.0.0.1 port 43644 ssh2: RSA SHA256:HYZQRhxVAAt6Gcr+zBcKZEn/OixtikngqD7jOIOR0c8 Oct 2 19:56:07.065000 audit[1269]: CRED_ACQ pid=1269 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:56:07.065000 audit[1269]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe2f20880 a2=3 a3=1 items=0 ppid=1 pid=1269 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:07.065000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 19:56:07.066533 sshd[1269]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:56:07.070136 systemd-logind[1130]: New session 7 of user core. Oct 2 19:56:07.070793 systemd[1]: Started session-7.scope. 
Oct 2 19:56:07.072000 audit[1269]: USER_START pid=1269 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:56:07.074000 audit[1272]: CRED_ACQ pid=1272 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:56:07.123000 audit[1273]: USER_ACCT pid=1273 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:07.124728 sudo[1273]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 19:56:07.124931 sudo[1273]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:56:07.123000 audit[1273]: CRED_REFR pid=1273 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:07.126000 audit[1273]: USER_START pid=1273 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:07.649206 systemd[1]: Reloading. Oct 2 19:56:07.712681 /usr/lib/systemd/system-generators/torcx-generator[1303]: time="2023-10-02T19:56:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:56:07.713165 /usr/lib/systemd/system-generators/torcx-generator[1303]: time="2023-10-02T19:56:07Z" level=info msg="torcx already run" Oct 2 19:56:07.773591 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:56:07.773610 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:56:07.789884 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Oct 2 19:56:07.839000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.839000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.839000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.839000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.839000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.839000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.839000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.839000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.839000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.840000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.840000 audit: BPF prog-id=31 op=LOAD Oct 2 19:56:07.840000 audit: BPF prog-id=15 op=UNLOAD Oct 2 19:56:07.840000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.840000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.840000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.840000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.840000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.840000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.840000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.840000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.840000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.840000 audit: BPF prog-id=32 op=LOAD Oct 2 19:56:07.840000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.840000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.840000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.840000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.840000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.840000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.840000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.840000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.840000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.840000 audit: BPF prog-id=33 op=LOAD Oct 2 19:56:07.840000 audit: BPF prog-id=16 op=UNLOAD Oct 2 19:56:07.840000 audit: BPF prog-id=17 op=UNLOAD Oct 2 19:56:07.840000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.840000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.840000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.840000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.840000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.840000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.840000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.840000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.840000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.840000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.840000 audit: BPF prog-id=34 op=LOAD Oct 2 19:56:07.840000 audit: BPF prog-id=29 op=UNLOAD Oct 2 19:56:07.841000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.841000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.841000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.841000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.841000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.841000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.841000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.841000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.841000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.841000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.841000 audit: BPF prog-id=35 op=LOAD Oct 2 19:56:07.841000 audit: BPF prog-id=21 op=UNLOAD Oct 2 19:56:07.841000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.841000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.841000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.841000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.841000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.841000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.841000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.841000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.841000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.841000 audit: BPF prog-id=36 op=LOAD Oct 2 19:56:07.841000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.841000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.841000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.841000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.841000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.841000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.841000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.841000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.841000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.841000 audit: BPF prog-id=37 op=LOAD Oct 2 19:56:07.841000 audit: BPF prog-id=22 op=UNLOAD Oct 2 19:56:07.841000 audit: BPF prog-id=23 op=UNLOAD Oct 2 19:56:07.843000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.843000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.843000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.843000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.843000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.843000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.843000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.843000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.843000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.844000 audit: BPF prog-id=38 op=LOAD Oct 2 19:56:07.844000 audit: BPF prog-id=20 op=UNLOAD Oct 2 19:56:07.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.844000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.844000 audit: BPF prog-id=39 op=LOAD Oct 2 19:56:07.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.844000 audit: BPF prog-id=40 op=LOAD Oct 2 19:56:07.844000 audit: BPF prog-id=18 op=UNLOAD Oct 2 19:56:07.844000 audit: BPF prog-id=19 op=UNLOAD Oct 2 19:56:07.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.844000 audit: BPF prog-id=41 op=LOAD Oct 2 19:56:07.844000 audit: BPF prog-id=25 op=UNLOAD Oct 2 19:56:07.845000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.845000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.845000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.845000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.845000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.845000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.845000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.845000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.845000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.845000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:56:07.845000 audit: BPF prog-id=42 op=LOAD Oct 2 19:56:07.845000 audit: BPF prog-id=24 op=UNLOAD Oct 2 19:56:07.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.846000 audit: BPF prog-id=43 op=LOAD Oct 2 19:56:07.846000 audit: BPF prog-id=26 op=UNLOAD Oct 2 19:56:07.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:56:07.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.846000 audit: BPF prog-id=44 op=LOAD Oct 2 19:56:07.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:07.846000 audit: BPF prog-id=45 op=LOAD Oct 2 19:56:07.846000 audit: BPF prog-id=27 op=UNLOAD Oct 2 19:56:07.846000 audit: BPF prog-id=28 op=UNLOAD Oct 2 19:56:07.853576 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 19:56:07.860053 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 19:56:07.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:07.860631 systemd[1]: Reached target network-online.target. Oct 2 19:56:07.862504 systemd[1]: Started kubelet.service. Oct 2 19:56:07.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:07.878658 systemd[1]: Starting coreos-metadata.service... 
Oct 2 19:56:07.887812 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 2 19:56:07.887986 systemd[1]: Finished coreos-metadata.service. Oct 2 19:56:07.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:07.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:08.050209 kubelet[1341]: E1002 19:56:08.050012 1341 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Oct 2 19:56:08.053749 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 2 19:56:08.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 2 19:56:08.053874 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 2 19:56:08.185329 systemd[1]: Stopped kubelet.service. Oct 2 19:56:08.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:08.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:08.199843 systemd[1]: Reloading. Oct 2 19:56:08.253551 /usr/lib/systemd/system-generators/torcx-generator[1407]: time="2023-10-02T19:56:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:56:08.253578 /usr/lib/systemd/system-generators/torcx-generator[1407]: time="2023-10-02T19:56:08Z" level=info msg="torcx already run" Oct 2 19:56:08.316855 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:56:08.317237 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:56:08.333896 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Oct 2 19:56:08.387000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.387000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.387000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.387000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.387000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.387000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.387000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.387000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.387000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.388000 audit: BPF prog-id=46 op=LOAD Oct 2 19:56:08.388000 audit: BPF prog-id=31 op=UNLOAD Oct 2 19:56:08.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.388000 audit: BPF prog-id=47 op=LOAD Oct 2 19:56:08.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.388000 audit: BPF prog-id=48 op=LOAD Oct 2 19:56:08.388000 audit: BPF prog-id=32 op=UNLOAD Oct 2 19:56:08.388000 audit: BPF prog-id=33 op=UNLOAD Oct 2 19:56:08.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.388000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.388000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.388000 audit: BPF prog-id=49 op=LOAD Oct 2 19:56:08.388000 audit: BPF prog-id=34 op=UNLOAD Oct 2 19:56:08.389000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.389000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.389000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.389000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.389000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.389000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.389000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.389000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.389000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.389000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.389000 audit: BPF prog-id=50 op=LOAD Oct 2 19:56:08.389000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:56:08.389000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.389000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.389000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.389000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.389000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.389000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.389000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.389000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.389000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.389000 audit: BPF prog-id=51 op=LOAD Oct 2 19:56:08.389000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.389000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.389000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.389000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.389000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.389000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.389000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.389000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.389000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.389000 audit: BPF prog-id=52 op=LOAD Oct 2 19:56:08.389000 audit: BPF prog-id=36 op=UNLOAD Oct 2 19:56:08.389000 audit: BPF prog-id=37 op=UNLOAD Oct 2 19:56:08.391000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.391000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.391000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.391000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.391000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.391000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.391000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.391000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.391000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.391000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.391000 audit: BPF prog-id=53 op=LOAD Oct 2 19:56:08.391000 audit: BPF prog-id=38 op=UNLOAD Oct 2 19:56:08.391000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.391000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.391000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.391000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.391000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.391000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.391000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.391000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.391000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.391000 audit: BPF prog-id=54 op=LOAD Oct 2 19:56:08.391000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.391000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.391000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.391000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.391000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.391000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.391000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.391000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.391000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.391000 audit: BPF prog-id=55 op=LOAD Oct 2 19:56:08.391000 audit: BPF prog-id=39 op=UNLOAD Oct 2 19:56:08.391000 audit: BPF prog-id=40 op=UNLOAD Oct 2 19:56:08.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.392000 audit: BPF prog-id=56 op=LOAD Oct 2 19:56:08.392000 audit: BPF prog-id=41 op=UNLOAD Oct 2 19:56:08.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.392000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.393000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:56:08.393000 audit: BPF prog-id=57 op=LOAD Oct 2 19:56:08.393000 audit: BPF prog-id=42 op=UNLOAD Oct 2 19:56:08.394000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.394000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.394000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.394000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.394000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.394000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.394000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.394000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.394000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.394000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.394000 audit: BPF prog-id=58 op=LOAD Oct 2 19:56:08.394000 audit: BPF prog-id=43 op=UNLOAD Oct 2 19:56:08.394000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.394000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.394000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.394000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.394000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.394000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:56:08.394000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.394000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.394000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.394000 audit: BPF prog-id=59 op=LOAD Oct 2 19:56:08.394000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.394000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.394000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.394000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.394000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.394000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.394000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.394000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.394000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:08.394000 audit: BPF prog-id=60 op=LOAD Oct 2 19:56:08.394000 audit: BPF prog-id=44 op=UNLOAD Oct 2 19:56:08.394000 audit: BPF prog-id=45 op=UNLOAD Oct 2 19:56:08.406611 systemd[1]: Started kubelet.service. Oct 2 19:56:08.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:08.460932 kubelet[1445]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:56:08.460932 kubelet[1445]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 2 19:56:08.461249 kubelet[1445]: I1002 19:56:08.461026 1445 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:56:08.462243 kubelet[1445]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:56:08.462243 kubelet[1445]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:56:09.201804 kubelet[1445]: I1002 19:56:09.201765 1445 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Oct 2 19:56:09.201804 kubelet[1445]: I1002 19:56:09.201794 1445 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:56:09.201997 kubelet[1445]: I1002 19:56:09.201982 1445 server.go:836] "Client rotation is on, will bootstrap in background" Oct 2 19:56:09.206841 kubelet[1445]: I1002 19:56:09.206812 1445 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:56:09.209386 kubelet[1445]: W1002 19:56:09.209363 1445 machine.go:65] Cannot read vendor id correctly, set empty. Oct 2 19:56:09.210193 kubelet[1445]: I1002 19:56:09.210172 1445 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 2 19:56:09.210567 kubelet[1445]: I1002 19:56:09.210549 1445 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:56:09.210631 kubelet[1445]: I1002 19:56:09.210618 1445 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Oct 2 19:56:09.210776 kubelet[1445]: I1002 19:56:09.210705 1445 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 19:56:09.210776 kubelet[1445]: I1002 19:56:09.210716 1445 container_manager_linux.go:308] "Creating device plugin manager" Oct 2 19:56:09.210917 kubelet[1445]: I1002 19:56:09.210893 1445 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:56:09.215334 kubelet[1445]: 
I1002 19:56:09.215313 1445 kubelet.go:398] "Attempting to sync node with API server" Oct 2 19:56:09.215334 kubelet[1445]: I1002 19:56:09.215334 1445 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:56:09.215558 kubelet[1445]: I1002 19:56:09.215548 1445 kubelet.go:297] "Adding apiserver pod source" Oct 2 19:56:09.215593 kubelet[1445]: I1002 19:56:09.215562 1445 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:56:09.215892 kubelet[1445]: E1002 19:56:09.215868 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:09.215953 kubelet[1445]: E1002 19:56:09.215897 1445 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:09.216662 kubelet[1445]: I1002 19:56:09.216644 1445 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:56:09.217565 kubelet[1445]: W1002 19:56:09.217551 1445 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 2 19:56:09.218085 kubelet[1445]: I1002 19:56:09.218051 1445 server.go:1186] "Started kubelet" Oct 2 19:56:09.218207 kubelet[1445]: I1002 19:56:09.218187 1445 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:56:09.218896 kubelet[1445]: I1002 19:56:09.218871 1445 server.go:451] "Adding debug handlers to kubelet server" Oct 2 19:56:09.219127 kubelet[1445]: E1002 19:56:09.219111 1445 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:56:09.219224 kubelet[1445]: E1002 19:56:09.219212 1445 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:56:09.220000 audit[1445]: AVC avc: denied { mac_admin } for pid=1445 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:09.220000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:56:09.220000 audit[1445]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000b76480 a1=400114a918 a2=4000b76450 a3=25 items=0 ppid=1 pid=1445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.220000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:56:09.220000 audit[1445]: AVC avc: denied { mac_admin } for pid=1445 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:09.220000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:56:09.220000 audit[1445]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000244560 a1=400114a930 a2=4000b76510 a3=25 items=0 ppid=1 pid=1445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.220000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:56:09.221076 kubelet[1445]: I1002 19:56:09.220757 1445 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:56:09.221076 kubelet[1445]: I1002 19:56:09.220790 1445 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:56:09.221076 kubelet[1445]: I1002 19:56:09.220844 1445 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:56:09.221380 kubelet[1445]: I1002 19:56:09.221364 1445 volume_manager.go:293] "Starting Kubelet Volume Manager" Oct 2 19:56:09.222213 kubelet[1445]: E1002 19:56:09.222190 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:09.223192 kubelet[1445]: I1002 19:56:09.223174 1445 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 2 19:56:09.226806 kubelet[1445]: W1002 19:56:09.226725 1445 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:56:09.226873 kubelet[1445]: E1002 19:56:09.226812 1445 
reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:56:09.238144 kubelet[1445]: E1002 19:56:09.238040 1445 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.0.0.12" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:56:09.238342 kubelet[1445]: W1002 19:56:09.238304 1445 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:56:09.238342 kubelet[1445]: E1002 19:56:09.238330 1445 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:56:09.238396 kubelet[1445]: W1002 19:56:09.238366 1445 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:56:09.238396 kubelet[1445]: E1002 19:56:09.238377 1445 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:56:09.238514 kubelet[1445]: E1002 19:56:09.238404 1445 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a628d6d0efd02", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 218022658, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 218022658, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:09.239913 kubelet[1445]: E1002 19:56:09.239837 1445 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a628d6d20b8d7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 219184855, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 219184855, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:09.245469 kubelet[1445]: I1002 19:56:09.245447 1445 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 2 19:56:09.245469 kubelet[1445]: I1002 19:56:09.245462 1445 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 2 19:56:09.245601 kubelet[1445]: I1002 19:56:09.245485 1445 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:56:09.246114 kubelet[1445]: E1002 19:56:09.245803 1445 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a628d6ea44720", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 244583712, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 244583712, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:09.247177 kubelet[1445]: E1002 19:56:09.247119 1445 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a628d6ea4736b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 244595051, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 244595051, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:09.248142 kubelet[1445]: E1002 19:56:09.248080 1445 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a628d6ea481c4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 244598724, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 244598724, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:09.248000 audit[1459]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1459 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:09.248000 audit[1459]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff7a33e40 a2=0 a3=1 items=0 ppid=1445 pid=1459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.248000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:56:09.250000 audit[1464]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1464 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:09.250000 audit[1464]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffd87ae3d0 a2=0 a3=1 items=0 ppid=1445 pid=1464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.250000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:56:09.264835 kubelet[1445]: I1002 19:56:09.264793 1445 policy_none.go:49] "None policy: Start" Oct 2 19:56:09.265567 kubelet[1445]: I1002 19:56:09.265545 1445 memory_manager.go:169] "Starting memorymanager" policy="None" Oct 2 19:56:09.265621 kubelet[1445]: I1002 19:56:09.265578 1445 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:56:09.271037 systemd[1]: Created slice kubepods.slice. Oct 2 19:56:09.252000 audit[1466]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1466 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:09.252000 audit[1466]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff4010be0 a2=0 a3=1 items=0 ppid=1445 pid=1466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.252000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:56:09.274847 systemd[1]: Created slice kubepods-burstable.slice. Oct 2 19:56:09.277319 systemd[1]: Created slice kubepods-besteffort.slice. 
Oct 2 19:56:09.278000 audit[1471]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1471 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:09.278000 audit[1471]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc5b99eb0 a2=0 a3=1 items=0 ppid=1445 pid=1471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.278000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:56:09.284048 kubelet[1445]: I1002 19:56:09.284017 1445 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:56:09.283000 audit[1445]: AVC avc: denied { mac_admin } for pid=1445 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:09.283000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:56:09.283000 audit[1445]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000ee0b10 a1=4000242540 a2=4000ee0ae0 a3=25 items=0 ppid=1 pid=1445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.283000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:56:09.284296 kubelet[1445]: I1002 19:56:09.284084 1445 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:56:09.284296 kubelet[1445]: I1002 19:56:09.284251 1445 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:56:09.286582 kubelet[1445]: E1002 19:56:09.285244 1445 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.12\" not found" Oct 2 19:56:09.286582 kubelet[1445]: E1002 19:56:09.286465 1445 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a628d71125ac0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 285352128, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 285352128, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:09.312000 audit[1476]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1476 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:09.312000 audit[1476]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffcdc49640 a2=0 a3=1 items=0 ppid=1445 pid=1476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.312000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:56:09.315000 audit[1477]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=1477 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:09.315000 audit[1477]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc07f0770 a2=0 a3=1 items=0 ppid=1445 pid=1477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.315000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:56:09.320000 audit[1480]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=1480 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:09.320000 audit[1480]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffe5d30390 a2=0 a3=1 items=0 ppid=1445 pid=1480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.320000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:56:09.323344 kubelet[1445]: I1002 19:56:09.323313 1445 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.12" Oct 2 19:56:09.325490 kubelet[1445]: E1002 19:56:09.325456 1445 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.12" Oct 2 19:56:09.325651 kubelet[1445]: E1002 19:56:09.325461 1445 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a628d6ea44720", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 
19, 56, 9, 244583712, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 323239025, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a628d6ea44720" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:09.325000 audit[1483]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:09.325000 audit[1483]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=fffff39513a0 a2=0 a3=1 items=0 ppid=1445 pid=1483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.325000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:56:09.327341 kubelet[1445]: E1002 19:56:09.327264 1445 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a628d6ea4736b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 244595051, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 323250684, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a628d6ea4736b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:09.328000 audit[1484]: NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_chain pid=1484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:09.328000 audit[1484]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd0c3f690 a2=0 a3=1 items=0 ppid=1445 pid=1484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.328000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:56:09.328872 kubelet[1445]: E1002 19:56:09.328441 1445 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a628d6ea481c4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 244598724, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 323255116, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a628d6ea481c4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:09.329000 audit[1485]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=1485 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:09.329000 audit[1485]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffd8b0bc0 a2=0 a3=1 items=0 ppid=1445 pid=1485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.329000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:56:09.331000 audit[1487]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=1487 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:09.331000 audit[1487]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffe3197360 a2=0 a3=1 items=0 ppid=1445 pid=1487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.331000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:56:09.334000 audit[1489]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1489 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:09.334000 audit[1489]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffccb2efc0 a2=0 a3=1 items=0 ppid=1445 pid=1489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.334000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:56:09.360000 audit[1492]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1492 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:09.360000 audit[1492]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=fffff9d3b910 a2=0 a3=1 items=0 ppid=1445 pid=1492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.360000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:56:09.363000 audit[1494]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=1494 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:09.363000 audit[1494]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffd914bac0 a2=0 a3=1 items=0 ppid=1445 pid=1494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.363000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:56:09.370000 audit[1497]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=1497 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:09.370000 audit[1497]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=540 a0=3 a1=ffffea10a480 a2=0 a3=1 items=0 ppid=1445 pid=1497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.370000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:56:09.371923 kubelet[1445]: I1002 19:56:09.371899 1445 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Oct 2 19:56:09.371000 audit[1498]: NETFILTER_CFG table=mangle:17 family=10 entries=2 op=nft_register_chain pid=1498 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:09.371000 audit[1498]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffcb7c34e0 a2=0 a3=1 items=0 ppid=1445 pid=1498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.371000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:56:09.371000 audit[1499]: NETFILTER_CFG table=mangle:18 family=2 entries=1 op=nft_register_chain pid=1499 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:09.371000 audit[1499]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc2d857a0 a2=0 a3=1 items=0 ppid=1445 pid=1499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.371000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:56:09.372000 audit[1500]: NETFILTER_CFG table=nat:19 family=10 entries=2 op=nft_register_chain pid=1500 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:09.372000 audit[1500]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffff4f084e0 a2=0 a3=1 items=0 ppid=1445 pid=1500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.372000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:56:09.373000 audit[1501]: NETFILTER_CFG table=nat:20 family=2 entries=1 op=nft_register_chain pid=1501 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:09.373000 audit[1501]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc93aa4e0 a2=0 a3=1 items=0 ppid=1445 pid=1501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.373000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:56:09.374000 audit[1502]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_chain pid=1502 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:09.374000 audit[1502]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc1540520 a2=0 a3=1 items=0 ppid=1445 pid=1502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.374000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:56:09.375000 audit[1504]: NETFILTER_CFG table=nat:22 family=10 entries=1 op=nft_register_rule pid=1504 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:09.375000 audit[1504]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffc2416810 a2=0 a3=1 items=0 ppid=1445 pid=1504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.375000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:56:09.376000 audit[1505]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=1505 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:09.376000 audit[1505]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffe97a2020 a2=0 a3=1 items=0 ppid=1445 pid=1505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.376000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:56:09.379000 audit[1507]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=1507 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:09.379000 audit[1507]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=fffffd2b46b0 a2=0 a3=1 items=0 ppid=1445 pid=1507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.379000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:56:09.380000 audit[1508]: NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=1508 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:09.380000 audit[1508]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc8fe3c40 a2=0 a3=1 items=0 ppid=1445 pid=1508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.380000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:56:09.381000 audit[1509]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=1509 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:09.381000 audit[1509]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd7a264c0 a2=0 a3=1 items=0 ppid=1445 pid=1509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.381000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:56:09.383000 audit[1511]: NETFILTER_CFG table=nat:27 family=10 entries=1 op=nft_register_rule pid=1511 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:09.383000 audit[1511]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffcc29cac0 a2=0 a3=1 items=0 ppid=1445 pid=1511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.383000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:56:09.386000 audit[1513]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=1513 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:09.386000 audit[1513]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffff2e09990 a2=0 a3=1 items=0 ppid=1445 pid=1513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.386000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:56:09.389000 audit[1515]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=1515 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:09.389000 audit[1515]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffce49d580 a2=0 a3=1 items=0 ppid=1445 pid=1515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.389000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:56:09.391000 audit[1517]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=1517 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:09.391000 audit[1517]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffd2d4bb40 a2=0 a3=1 items=0 ppid=1445 pid=1517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.391000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:56:09.396000 audit[1519]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=1519 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:09.396000 audit[1519]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=556 a0=3 a1=ffffcb5e06d0 a2=0 a3=1 items=0 ppid=1445 pid=1519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.396000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:56:09.397990 kubelet[1445]: I1002 19:56:09.397915 1445 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Oct 2 19:56:09.397990 kubelet[1445]: I1002 19:56:09.397944 1445 status_manager.go:176] "Starting to sync pod status with apiserver" Oct 2 19:56:09.397990 kubelet[1445]: I1002 19:56:09.397963 1445 kubelet.go:2113] "Starting kubelet main sync loop" Oct 2 19:56:09.398100 kubelet[1445]: E1002 19:56:09.398025 1445 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 19:56:09.398000 audit[1520]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=1520 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:09.398000 audit[1520]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff5cb1350 a2=0 a3=1 items=0 ppid=1445 pid=1520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.398000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:56:09.400055 kubelet[1445]: W1002 19:56:09.400007 1445 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:56:09.400055 kubelet[1445]: E1002 19:56:09.400045 1445 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:56:09.399000 audit[1521]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=1521 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:09.399000 audit[1521]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffde97f900 a2=0 a3=1 items=0 ppid=1445 pid=1521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 
19:56:09.399000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:56:09.400000 audit[1522]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=1522 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:09.400000 audit[1522]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe1206480 a2=0 a3=1 items=0 ppid=1445 pid=1522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:09.400000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:56:09.440650 kubelet[1445]: E1002 19:56:09.440593 1445 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.0.0.12" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:56:09.526975 kubelet[1445]: I1002 19:56:09.526850 1445 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.12" Oct 2 19:56:09.532720 kubelet[1445]: E1002 19:56:09.532691 1445 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.12" Oct 2 19:56:09.533568 kubelet[1445]: E1002 19:56:09.533468 1445 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a628d6ea44720", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 244583712, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 526796514, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a628d6ea44720" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:09.534729 kubelet[1445]: E1002 19:56:09.534657 1445 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a628d6ea4736b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 244595051, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 526810089, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a628d6ea4736b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:09.621620 kubelet[1445]: E1002 19:56:09.621529 1445 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a628d6ea481c4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 244598724, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 526813323, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a628d6ea481c4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:09.842439 kubelet[1445]: E1002 19:56:09.842329 1445 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.0.0.12" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:56:09.933635 kubelet[1445]: I1002 19:56:09.933595 1445 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.12" Oct 2 19:56:09.934835 kubelet[1445]: E1002 19:56:09.934806 1445 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.12" Oct 2 19:56:09.934835 kubelet[1445]: E1002 19:56:09.934762 1445 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a628d6ea44720", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 244583712, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 933561104, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a628d6ea44720" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:10.021639 kubelet[1445]: E1002 19:56:10.021514 1445 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a628d6ea4736b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 244595051, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 933567372, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a628d6ea4736b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:10.216366 kubelet[1445]: E1002 19:56:10.216195 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:10.222356 kubelet[1445]: E1002 19:56:10.222238 1445 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a628d6ea481c4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 244598724, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 933570167, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a628d6ea481c4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:10.286537 kubelet[1445]: W1002 19:56:10.286486 1445 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:56:10.286537 kubelet[1445]: E1002 19:56:10.286522 1445 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:56:10.350426 kubelet[1445]: W1002 19:56:10.350370 1445 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:56:10.350426 kubelet[1445]: E1002 19:56:10.350403 1445 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:56:10.434301 kubelet[1445]: W1002 19:56:10.434228 1445 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:56:10.434301 kubelet[1445]: E1002 19:56:10.434262 1445 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:56:10.563340 kubelet[1445]: W1002 19:56:10.563212 1445 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:56:10.563340 kubelet[1445]: E1002 19:56:10.563249 1445 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:56:10.644068 kubelet[1445]: E1002 19:56:10.644015 1445 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.0.0.12" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:56:10.736357 kubelet[1445]: I1002 19:56:10.736319 1445 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.12" Oct 2 19:56:10.737611 kubelet[1445]: E1002 19:56:10.737587 1445 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.12" Oct 2 19:56:10.737720 kubelet[1445]: E1002 19:56:10.737643 1445 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a628d6ea44720", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, 
time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 244583712, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 10, 736263460, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a628d6ea44720" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:10.738610 kubelet[1445]: E1002 19:56:10.738546 1445 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a628d6ea4736b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 244595051, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 10, 736268691, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a628d6ea4736b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:10.821324 kubelet[1445]: E1002 19:56:10.821068 1445 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a628d6ea481c4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 244598724, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 10, 736271287, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a628d6ea481c4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:11.217101 kubelet[1445]: E1002 19:56:11.217004 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:12.217938 kubelet[1445]: E1002 19:56:12.217877 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:12.238637 kubelet[1445]: W1002 19:56:12.238588 1445 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:56:12.238637 kubelet[1445]: E1002 19:56:12.238620 1445 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:56:12.245770 kubelet[1445]: E1002 19:56:12.245738 1445 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.0.0.12" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:56:12.339016 kubelet[1445]: I1002 19:56:12.338978 1445 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.12" Oct 2 19:56:12.340528 kubelet[1445]: E1002 19:56:12.340449 1445 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a628d6ea44720", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 244583712, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 12, 338937261, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a628d6ea44720" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:12.340727 kubelet[1445]: E1002 19:56:12.340702 1445 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.12" Oct 2 19:56:12.341339 kubelet[1445]: E1002 19:56:12.341259 1445 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a628d6ea4736b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 244595051, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 12, 338949282, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a628d6ea4736b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:12.342207 kubelet[1445]: E1002 19:56:12.342141 1445 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a628d6ea481c4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 244598724, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 12, 338952078, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a628d6ea481c4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:12.830529 kubelet[1445]: W1002 19:56:12.830499 1445 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:56:12.830684 kubelet[1445]: E1002 19:56:12.830673 1445 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:56:12.918951 kubelet[1445]: W1002 19:56:12.918903 1445 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:56:12.919104 kubelet[1445]: E1002 19:56:12.919093 1445 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:56:13.177816 kubelet[1445]: W1002 19:56:13.177719 1445 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:56:13.177974 kubelet[1445]: E1002 19:56:13.177962 1445 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:56:13.218100 kubelet[1445]: E1002 19:56:13.218070 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:14.219007 kubelet[1445]: E1002 19:56:14.218971 1445 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:15.220134 kubelet[1445]: E1002 19:56:15.220058 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:15.447837 kubelet[1445]: E1002 19:56:15.447782 1445 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.0.0.12" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:56:15.541857 kubelet[1445]: I1002 19:56:15.541766 1445 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.12" Oct 2 19:56:15.542770 kubelet[1445]: E1002 19:56:15.542751 1445 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.12" Oct 2 19:56:15.543105 kubelet[1445]: E1002 19:56:15.543034 1445 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a628d6ea44720", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 244583712, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 15, 541721430, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a628d6ea44720" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:15.544214 kubelet[1445]: E1002 19:56:15.544147 1445 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a628d6ea4736b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 244595051, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 15, 541732296, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a628d6ea4736b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:56:15.545132 kubelet[1445]: E1002 19:56:15.545066 1445 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a628d6ea481c4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 56, 9, 244598724, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 56, 15, 541735173, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a628d6ea481c4" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:56:16.221244 kubelet[1445]: E1002 19:56:16.221205 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:16.777480 kubelet[1445]: W1002 19:56:16.777443 1445 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:56:16.777630 kubelet[1445]: E1002 19:56:16.777512 1445 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:56:17.221434 kubelet[1445]: E1002 19:56:17.221310 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:17.580454 kubelet[1445]: W1002 19:56:17.580355 1445 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:56:17.580454 kubelet[1445]: E1002 19:56:17.580389 1445 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:56:17.982615 kubelet[1445]: W1002 19:56:17.982513 1445 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:56:17.982615 kubelet[1445]: E1002 19:56:17.982547 1445 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:56:18.221800 kubelet[1445]: E1002 19:56:18.221761 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:18.405797 kubelet[1445]: W1002 19:56:18.405703 1445 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:56:18.405962 kubelet[1445]: E1002 19:56:18.405950 1445 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:56:19.206187 kubelet[1445]: I1002 19:56:19.206140 1445 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 19:56:19.222470 kubelet[1445]: E1002 19:56:19.222442 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:19.286141 kubelet[1445]: E1002 19:56:19.286113 1445 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node 
\"10.0.0.12\" not found" Oct 2 19:56:19.604725 kubelet[1445]: E1002 19:56:19.604599 1445 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.12" not found Oct 2 19:56:20.223737 kubelet[1445]: E1002 19:56:20.223678 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:20.643063 kubelet[1445]: E1002 19:56:20.643007 1445 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.12" not found Oct 2 19:56:21.224471 kubelet[1445]: E1002 19:56:21.224432 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:21.859991 kubelet[1445]: E1002 19:56:21.859957 1445 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.12\" not found" node="10.0.0.12" Oct 2 19:56:21.943681 kubelet[1445]: I1002 19:56:21.943649 1445 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.12" Oct 2 19:56:22.043504 kubelet[1445]: I1002 19:56:22.043471 1445 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.12" Oct 2 19:56:22.054384 kubelet[1445]: E1002 19:56:22.054344 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:22.154887 kubelet[1445]: E1002 19:56:22.154726 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:22.225560 kubelet[1445]: E1002 19:56:22.225486 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:22.255174 kubelet[1445]: E1002 19:56:22.255126 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:22.356046 kubelet[1445]: E1002 19:56:22.355987 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:22.420800 sudo[1273]: pam_unix(sudo:session): session closed for user root Oct 2 19:56:22.422578 kernel: kauditd_printk_skb: 474 callbacks suppressed Oct 2 19:56:22.422619 kernel: audit: type=1106 audit(1696276582.419:560): pid=1273 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:22.419000 audit[1273]: USER_END pid=1273 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:22.426175 kernel: audit: type=1104 audit(1696276582.419:561): pid=1273 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:56:22.419000 audit[1273]: CRED_DISP pid=1273 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:56:22.425725 sshd[1269]: pam_unix(sshd:session): session closed for user core Oct 2 19:56:22.425000 audit[1269]: USER_END pid=1269 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:56:22.425000 audit[1269]: CRED_DISP pid=1269 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:56:22.428100 systemd[1]: sshd@6-10.0.0.12:22-10.0.0.1:43644.service: Deactivated successfully. Oct 2 19:56:22.428858 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 19:56:22.432832 kernel: audit: type=1106 audit(1696276582.425:562): pid=1269 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:56:22.432916 kernel: audit: type=1104 audit(1696276582.425:563): pid=1269 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:56:22.432936 kernel: audit: type=1131 audit(1696276582.427:564): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.12:22-10.0.0.1:43644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:22.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.12:22-10.0.0.1:43644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:56:22.430953 systemd-logind[1130]: Session 7 logged out. Waiting for processes to exit. Oct 2 19:56:22.431645 systemd-logind[1130]: Removed session 7. 
Oct 2 19:56:22.456540 kubelet[1445]: E1002 19:56:22.456489 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:22.557433 kubelet[1445]: E1002 19:56:22.557375 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:22.658329 kubelet[1445]: E1002 19:56:22.658274 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:22.759206 kubelet[1445]: E1002 19:56:22.759092 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:22.859669 kubelet[1445]: E1002 19:56:22.859624 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:22.960228 kubelet[1445]: E1002 19:56:22.960168 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:23.060976 kubelet[1445]: E1002 19:56:23.060849 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:23.161495 kubelet[1445]: E1002 19:56:23.161436 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:23.226354 kubelet[1445]: E1002 19:56:23.226284 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:23.262572 kubelet[1445]: E1002 19:56:23.262518 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:23.363435 kubelet[1445]: E1002 19:56:23.363379 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:23.464051 kubelet[1445]: E1002 19:56:23.463985 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:23.564598 kubelet[1445]: E1002 19:56:23.564535 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:23.665318 kubelet[1445]: E1002 19:56:23.665159 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:23.765973 kubelet[1445]: E1002 19:56:23.765919 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:23.866522 kubelet[1445]: E1002 19:56:23.866466 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:23.967253 kubelet[1445]: E1002 19:56:23.967106 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:24.067657 kubelet[1445]: E1002 19:56:24.067606 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:24.168341 kubelet[1445]: E1002 19:56:24.168295 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:24.227255 kubelet[1445]: E1002 19:56:24.227144 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:24.269308 kubelet[1445]: E1002 19:56:24.269250 1445 kubelet_node_status.go:458] "Error 
getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:24.369698 kubelet[1445]: E1002 19:56:24.369645 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:24.470167 kubelet[1445]: E1002 19:56:24.470107 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:24.570465 kubelet[1445]: E1002 19:56:24.570364 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:24.671042 kubelet[1445]: E1002 19:56:24.670986 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:24.771708 kubelet[1445]: E1002 19:56:24.771640 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:24.872360 kubelet[1445]: E1002 19:56:24.872297 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:24.972848 kubelet[1445]: E1002 19:56:24.972791 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:25.073453 kubelet[1445]: E1002 19:56:25.073408 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:25.175159 kubelet[1445]: E1002 19:56:25.174010 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:25.227914 kubelet[1445]: E1002 19:56:25.227867 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:25.275066 kubelet[1445]: E1002 19:56:25.275022 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:25.375742 kubelet[1445]: E1002 19:56:25.375694 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:25.476770 kubelet[1445]: E1002 19:56:25.476662 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:25.577403 kubelet[1445]: E1002 19:56:25.577353 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:25.678094 kubelet[1445]: E1002 19:56:25.678043 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:25.778959 kubelet[1445]: E1002 19:56:25.778841 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:25.879569 kubelet[1445]: E1002 19:56:25.879522 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:25.980226 kubelet[1445]: E1002 19:56:25.980174 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:26.080958 kubelet[1445]: E1002 19:56:26.080826 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:26.181574 kubelet[1445]: E1002 19:56:26.181517 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:26.228412 
kubelet[1445]: E1002 19:56:26.228353 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:26.282618 kubelet[1445]: E1002 19:56:26.282575 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:26.383379 kubelet[1445]: E1002 19:56:26.383333 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:26.484156 kubelet[1445]: E1002 19:56:26.484108 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:26.584776 kubelet[1445]: E1002 19:56:26.584696 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:26.685471 kubelet[1445]: E1002 19:56:26.685346 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:26.786154 kubelet[1445]: E1002 19:56:26.786073 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:26.886675 kubelet[1445]: E1002 19:56:26.886611 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:26.987247 kubelet[1445]: E1002 19:56:26.987120 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:27.087835 kubelet[1445]: E1002 19:56:27.087768 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:27.188384 kubelet[1445]: E1002 19:56:27.188317 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:27.229188 kubelet[1445]: E1002 19:56:27.229133 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:27.289550 kubelet[1445]: E1002 19:56:27.289425 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:27.390450 kubelet[1445]: E1002 19:56:27.390399 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:27.491205 kubelet[1445]: E1002 19:56:27.491143 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:27.596772 kubelet[1445]: E1002 19:56:27.591806 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:27.692604 kubelet[1445]: E1002 19:56:27.692526 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:27.793430 kubelet[1445]: E1002 19:56:27.793366 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:27.894054 kubelet[1445]: E1002 19:56:27.893999 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:27.995145 kubelet[1445]: E1002 19:56:27.995077 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:28.095764 kubelet[1445]: E1002 19:56:28.095705 1445 kubelet_node_status.go:458] "Error getting the current 
node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:28.196536 kubelet[1445]: E1002 19:56:28.196400 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:28.229329 kubelet[1445]: E1002 19:56:28.229236 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:28.296536 kubelet[1445]: E1002 19:56:28.296485 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:28.397415 kubelet[1445]: E1002 19:56:28.397355 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:28.498167 kubelet[1445]: E1002 19:56:28.498058 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:28.598757 kubelet[1445]: E1002 19:56:28.598709 1445 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:56:28.700335 kubelet[1445]: I1002 19:56:28.700271 1445 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 19:56:28.700753 env[1141]: time="2023-10-02T19:56:28.700649109Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 2 19:56:28.701044 kubelet[1445]: I1002 19:56:28.700806 1445 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 19:56:29.216388 kubelet[1445]: E1002 19:56:29.216343 1445 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:29.228711 kubelet[1445]: I1002 19:56:29.228630 1445 apiserver.go:52] "Watching apiserver" Oct 2 19:56:29.229727 kubelet[1445]: E1002 19:56:29.229709 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:29.231835 kubelet[1445]: I1002 19:56:29.231800 1445 topology_manager.go:210] "Topology Admit Handler" Oct 2 19:56:29.231898 kubelet[1445]: I1002 19:56:29.231875 1445 topology_manager.go:210] "Topology Admit Handler" Oct 2 19:56:29.236730 systemd[1]: Created slice kubepods-burstable-pod24e9887e_8f45_47a9_a855_9f7ea67bebf8.slice. Oct 2 19:56:29.250882 systemd[1]: Created slice kubepods-besteffort-podf2a6172e_a292_44f0_a384_847aa3d4f1c5.slice. 
Oct 2 19:56:29.324847 kubelet[1445]: I1002 19:56:29.324811 1445 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 2 19:56:29.336275 kubelet[1445]: I1002 19:56:29.336233 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2a6172e-a292-44f0-a384-847aa3d4f1c5-xtables-lock\") pod \"kube-proxy-xg947\" (UID: \"f2a6172e-a292-44f0-a384-847aa3d4f1c5\") " pod="kube-system/kube-proxy-xg947" Oct 2 19:56:29.336275 kubelet[1445]: I1002 19:56:29.336288 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/24e9887e-8f45-47a9-a855-9f7ea67bebf8-cilium-config-path\") pod \"cilium-wxnxh\" (UID: \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\") " pod="kube-system/cilium-wxnxh" Oct 2 19:56:29.336451 kubelet[1445]: I1002 19:56:29.336316 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-host-proc-sys-net\") pod \"cilium-wxnxh\" (UID: \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\") " pod="kube-system/cilium-wxnxh" Oct 2 19:56:29.336451 kubelet[1445]: I1002 19:56:29.336336 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/24e9887e-8f45-47a9-a855-9f7ea67bebf8-hubble-tls\") pod \"cilium-wxnxh\" (UID: \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\") " pod="kube-system/cilium-wxnxh" Oct 2 19:56:29.336451 kubelet[1445]: I1002 19:56:29.336363 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-xtables-lock\") pod \"cilium-wxnxh\" (UID: \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\") " pod="kube-system/cilium-wxnxh" Oct 2 19:56:29.336451 kubelet[1445]: I1002 19:56:29.336425 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f2a6172e-a292-44f0-a384-847aa3d4f1c5-kube-proxy\") pod \"kube-proxy-xg947\" (UID: \"f2a6172e-a292-44f0-a384-847aa3d4f1c5\") " pod="kube-system/kube-proxy-xg947" Oct 2 19:56:29.336558 kubelet[1445]: I1002 19:56:29.336463 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wz6m\" (UniqueName: \"kubernetes.io/projected/f2a6172e-a292-44f0-a384-847aa3d4f1c5-kube-api-access-7wz6m\") pod \"kube-proxy-xg947\" (UID: \"f2a6172e-a292-44f0-a384-847aa3d4f1c5\") " pod="kube-system/kube-proxy-xg947" Oct 2 19:56:29.336558 kubelet[1445]: I1002 19:56:29.336491 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-hostproc\") pod \"cilium-wxnxh\" (UID: \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\") " pod="kube-system/cilium-wxnxh" Oct 2 19:56:29.336558 kubelet[1445]: I1002 19:56:29.336513 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-cilium-cgroup\") pod \"cilium-wxnxh\" (UID: \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\") " pod="kube-system/cilium-wxnxh" Oct 2 
19:56:29.336640 kubelet[1445]: I1002 19:56:29.336571 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-cni-path\") pod \"cilium-wxnxh\" (UID: \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\") " pod="kube-system/cilium-wxnxh" Oct 2 19:56:29.336640 kubelet[1445]: I1002 19:56:29.336599 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd9mb\" (UniqueName: \"kubernetes.io/projected/24e9887e-8f45-47a9-a855-9f7ea67bebf8-kube-api-access-jd9mb\") pod \"cilium-wxnxh\" (UID: \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\") " pod="kube-system/cilium-wxnxh" Oct 2 19:56:29.336692 kubelet[1445]: I1002 19:56:29.336650 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2a6172e-a292-44f0-a384-847aa3d4f1c5-lib-modules\") pod \"kube-proxy-xg947\" (UID: \"f2a6172e-a292-44f0-a384-847aa3d4f1c5\") " pod="kube-system/kube-proxy-xg947" Oct 2 19:56:29.336692 kubelet[1445]: I1002 19:56:29.336676 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-cilium-run\") pod \"cilium-wxnxh\" (UID: \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\") " pod="kube-system/cilium-wxnxh" Oct 2 19:56:29.336739 kubelet[1445]: I1002 19:56:29.336697 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-bpf-maps\") pod \"cilium-wxnxh\" (UID: \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\") " pod="kube-system/cilium-wxnxh" Oct 2 19:56:29.336739 kubelet[1445]: I1002 19:56:29.336727 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-etc-cni-netd\") pod \"cilium-wxnxh\" (UID: \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\") " pod="kube-system/cilium-wxnxh" Oct 2 19:56:29.336787 kubelet[1445]: I1002 19:56:29.336767 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-host-proc-sys-kernel\") pod \"cilium-wxnxh\" (UID: \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\") " pod="kube-system/cilium-wxnxh" Oct 2 19:56:29.336813 kubelet[1445]: I1002 19:56:29.336789 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-lib-modules\") pod \"cilium-wxnxh\" (UID: \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\") " pod="kube-system/cilium-wxnxh" Oct 2 19:56:29.336843 kubelet[1445]: I1002 19:56:29.336818 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/24e9887e-8f45-47a9-a855-9f7ea67bebf8-clustermesh-secrets\") pod \"cilium-wxnxh\" (UID: \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\") " pod="kube-system/cilium-wxnxh" Oct 2 19:56:29.336843 kubelet[1445]: I1002 19:56:29.336833 1445 reconciler.go:41] "Reconciler: start to sync state" Oct 2 19:56:29.848660 kubelet[1445]: E1002 19:56:29.848622 1445 dns.go:156] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:56:29.849367 env[1141]: time="2023-10-02T19:56:29.849325256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wxnxh,Uid:24e9887e-8f45-47a9-a855-9f7ea67bebf8,Namespace:kube-system,Attempt:0,}" Oct 2 19:56:29.868239 kubelet[1445]: E1002 19:56:29.868204 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:56:29.869014 env[1141]: time="2023-10-02T19:56:29.868710787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xg947,Uid:f2a6172e-a292-44f0-a384-847aa3d4f1c5,Namespace:kube-system,Attempt:0,}" Oct 2 19:56:30.230659 kubelet[1445]: E1002 19:56:30.230633 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:30.530456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount312847982.mount: Deactivated successfully. Oct 2 19:56:30.536224 env[1141]: time="2023-10-02T19:56:30.536184021Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:30.537352 env[1141]: time="2023-10-02T19:56:30.537323435Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:30.538772 env[1141]: time="2023-10-02T19:56:30.538739237Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:30.539646 env[1141]: time="2023-10-02T19:56:30.539624933Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:30.540363 env[1141]: time="2023-10-02T19:56:30.540333513Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:30.543264 env[1141]: time="2023-10-02T19:56:30.543143686Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:30.544519 env[1141]: time="2023-10-02T19:56:30.544495479Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:30.547010 env[1141]: time="2023-10-02T19:56:30.546980128Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:30.576898 env[1141]: time="2023-10-02T19:56:30.576826786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:56:30.576898 env[1141]: time="2023-10-02T19:56:30.576869205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:56:30.576898 env[1141]: time="2023-10-02T19:56:30.576879960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:56:30.577350 env[1141]: time="2023-10-02T19:56:30.577303957Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/409a0964e8649f0f27e9966894e6aea5a35c94f4b1b231358dd8049cc9b9472c pid=1545 runtime=io.containerd.runc.v2 Oct 2 19:56:30.577550 env[1141]: time="2023-10-02T19:56:30.577299399Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:56:30.577550 env[1141]: time="2023-10-02T19:56:30.577419582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:56:30.577550 env[1141]: time="2023-10-02T19:56:30.577429937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:56:30.577711 env[1141]: time="2023-10-02T19:56:30.577616287Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/914f2f63e420899efc2048b67f3c52c8971cb82d30c44753e7384a6bae9804ef pid=1546 runtime=io.containerd.runc.v2 Oct 2 19:56:30.604619 systemd[1]: Started cri-containerd-409a0964e8649f0f27e9966894e6aea5a35c94f4b1b231358dd8049cc9b9472c.scope. Oct 2 19:56:30.605866 systemd[1]: Started cri-containerd-914f2f63e420899efc2048b67f3c52c8971cb82d30c44753e7384a6bae9804ef.scope. 
Oct 2 19:56:30.632000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.632000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.636369 kernel: audit: type=1400 audit(1696276590.632:565): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.636436 kernel: audit: type=1400 audit(1696276590.632:566): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.632000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.638980 kernel: audit: type=1400 audit(1696276590.632:567): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.639062 kernel: audit: type=1400 audit(1696276590.632:568): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.632000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.632000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.642947 kernel: audit: type=1400 audit(1696276590.632:569): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.643011 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:56:30.643038 kernel: audit: type=1400 audit(1696276590.632:570): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.632000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.645699 kernel: audit: audit_lost=1 audit_rate_limit=0 audit_backlog_limit=64 Oct 2 19:56:30.645752 kernel: audit: type=1400 audit(1696276590.632:571): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.632000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.647933 kernel: audit: backlog limit exceeded Oct 2 19:56:30.632000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.632000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.634000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.634000 audit: BPF prog-id=61 op=LOAD Oct 2 19:56:30.636000 audit[1568]: AVC avc: denied { bpf } for pid=1568 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.636000 audit[1568]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000195b38 a2=10 a3=0 items=0 ppid=1545 pid=1568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:30.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430396130393634653836343966306632376539393636383934653661 Oct 2 19:56:30.636000 audit[1568]: AVC avc: denied { perfmon } for pid=1568 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.636000 audit[1568]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=1545 pid=1568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:30.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430396130393634653836343966306632376539393636383934653661 Oct 2 19:56:30.636000 audit[1568]: AVC avc: denied { bpf } for pid=1568 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.636000 audit[1568]: AVC avc: denied { bpf } for pid=1568 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.636000 audit[1568]: AVC avc: denied { bpf } for pid=1568 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.636000 audit[1568]: AVC avc: denied { perfmon } for pid=1568 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.636000 audit[1568]: AVC avc: denied { perfmon } for pid=1568 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.636000 audit[1568]: AVC avc: denied { perfmon } for pid=1568 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.636000 audit[1568]: AVC avc: 
denied { perfmon } for pid=1568 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.636000 audit[1568]: AVC avc: denied { perfmon } for pid=1568 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.636000 audit[1568]: AVC avc: denied { bpf } for pid=1568 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.636000 audit[1568]: AVC avc: denied { bpf } for pid=1568 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.636000 audit: BPF prog-id=62 op=LOAD Oct 2 19:56:30.636000 audit[1568]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=1545 pid=1568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:30.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430396130393634653836343966306632376539393636383934653661 Oct 2 19:56:30.638000 audit[1568]: AVC avc: denied { bpf } for pid=1568 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.638000 audit[1568]: AVC avc: denied { bpf } for pid=1568 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.638000 audit[1568]: AVC avc: denied { perfmon } for pid=1568 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.638000 audit[1568]: AVC avc: denied { perfmon } for pid=1568 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.638000 audit[1568]: AVC avc: denied { perfmon } for pid=1568 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.638000 audit[1568]: AVC avc: denied { perfmon } for pid=1568 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.638000 audit[1568]: AVC avc: denied { perfmon } for pid=1568 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.638000 audit[1568]: AVC avc: denied { bpf } for pid=1568 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.638000 audit[1568]: AVC avc: denied { bpf } for pid=1568 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.638000 audit: BPF prog-id=63 op=LOAD Oct 2 19:56:30.638000 audit[1568]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000195670 a2=78 
a3=0 items=0 ppid=1545 pid=1568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:30.638000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430396130393634653836343966306632376539393636383934653661 Oct 2 19:56:30.640000 audit: BPF prog-id=63 op=UNLOAD Oct 2 19:56:30.640000 audit: BPF prog-id=62 op=UNLOAD Oct 2 19:56:30.640000 audit[1568]: AVC avc: denied { bpf } for pid=1568 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.640000 audit[1568]: AVC avc: denied { bpf } for pid=1568 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.640000 audit[1568]: AVC avc: denied { bpf } for pid=1568 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.640000 audit[1568]: AVC avc: denied { perfmon } for pid=1568 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.640000 audit[1568]: AVC avc: denied { perfmon } for pid=1568 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.640000 audit[1568]: AVC avc: denied { perfmon } for pid=1568 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.640000 audit[1568]: AVC avc: denied { perfmon } for pid=1568 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.640000 audit[1568]: AVC avc: denied { perfmon } for pid=1568 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.640000 audit[1568]: AVC avc: denied { bpf } for pid=1568 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.640000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.640000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.640000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.640000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.640000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:56:30.640000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.640000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.640000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.640000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.640000 audit[1568]: AVC avc: denied { bpf } for pid=1568 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.640000 audit: BPF prog-id=64 op=LOAD Oct 2 19:56:30.640000 audit[1568]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=1545 pid=1568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:30.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430396130393634653836343966306632376539393636383934653661 Oct 2 19:56:30.647000 audit: BPF prog-id=65 op=LOAD Oct 2 19:56:30.648000 audit[1567]: AVC avc: denied { bpf } for pid=1567 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.648000 audit[1567]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000195b38 a2=10 a3=0 items=0 ppid=1546 pid=1567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:30.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931346632663633653432303839396566633230343862363766336335 Oct 2 19:56:30.648000 audit[1567]: AVC avc: denied { perfmon } for pid=1567 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.648000 audit[1567]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=1546 pid=1567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:30.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931346632663633653432303839396566633230343862363766336335 Oct 2 19:56:30.648000 audit[1567]: 
AVC avc: denied { bpf } for pid=1567 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.648000 audit[1567]: AVC avc: denied { bpf } for pid=1567 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.648000 audit[1567]: AVC avc: denied { bpf } for pid=1567 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.648000 audit[1567]: AVC avc: denied { perfmon } for pid=1567 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.648000 audit[1567]: AVC avc: denied { perfmon } for pid=1567 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.648000 audit[1567]: AVC avc: denied { perfmon } for pid=1567 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.648000 audit[1567]: AVC avc: denied { perfmon } for pid=1567 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.648000 audit[1567]: AVC avc: denied { perfmon } for pid=1567 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.648000 audit[1567]: AVC avc: denied { bpf } for pid=1567 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.648000 audit[1567]: AVC avc: denied { bpf } for pid=1567 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.648000 audit: BPF prog-id=66 op=LOAD Oct 2 19:56:30.648000 audit[1567]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=1546 pid=1567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:30.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931346632663633653432303839396566633230343862363766336335 Oct 2 19:56:30.648000 audit[1567]: AVC avc: denied { bpf } for pid=1567 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.648000 audit[1567]: AVC avc: denied { bpf } for pid=1567 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.648000 audit[1567]: AVC avc: denied { perfmon } for pid=1567 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.648000 audit[1567]: AVC avc: denied { perfmon } for pid=1567 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.648000 audit[1567]: AVC avc: denied { perfmon } for pid=1567 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.648000 audit[1567]: AVC avc: denied { perfmon } for pid=1567 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.648000 audit[1567]: AVC avc: denied { perfmon } for pid=1567 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.648000 audit[1567]: AVC avc: denied { bpf } for pid=1567 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.648000 audit[1567]: AVC avc: denied { bpf } for pid=1567 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.648000 audit: BPF prog-id=67 op=LOAD Oct 2 19:56:30.648000 audit[1567]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=1546 pid=1567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:30.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931346632663633653432303839396566633230343862363766336335 Oct 2 19:56:30.648000 audit: BPF prog-id=67 op=UNLOAD Oct 2 19:56:30.648000 audit: BPF prog-id=66 op=UNLOAD Oct 2 19:56:30.648000 audit[1567]: AVC avc: denied { bpf } for pid=1567 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.648000 audit[1567]: AVC avc: denied { bpf } for pid=1567 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.648000 audit[1567]: AVC avc: denied { bpf } for pid=1567 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.648000 audit[1567]: AVC avc: denied { perfmon } for pid=1567 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.648000 audit[1567]: AVC avc: denied { perfmon } for pid=1567 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.648000 audit[1567]: AVC avc: denied { perfmon } for pid=1567 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.648000 audit[1567]: AVC avc: denied { perfmon } for pid=1567 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.648000 audit[1567]: AVC avc: denied { perfmon } for pid=1567 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.648000 audit[1567]: AVC avc: denied { bpf } for pid=1567 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.648000 audit[1567]: AVC avc: denied { bpf } for pid=1567 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:30.648000 audit: BPF prog-id=68 op=LOAD Oct 2 19:56:30.648000 audit[1567]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=1546 pid=1567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:30.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931346632663633653432303839396566633230343862363766336335 Oct 2 19:56:30.665266 env[1141]: time="2023-10-02T19:56:30.665221627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wxnxh,Uid:24e9887e-8f45-47a9-a855-9f7ea67bebf8,Namespace:kube-system,Attempt:0,} returns sandbox id \"914f2f63e420899efc2048b67f3c52c8971cb82d30c44753e7384a6bae9804ef\"" Oct 2 19:56:30.665637 env[1141]: time="2023-10-02T19:56:30.665437844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xg947,Uid:f2a6172e-a292-44f0-a384-847aa3d4f1c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"409a0964e8649f0f27e9966894e6aea5a35c94f4b1b231358dd8049cc9b9472c\"" Oct 2 19:56:30.666506 kubelet[1445]: E1002 19:56:30.666315 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:56:30.666506 kubelet[1445]: E1002 19:56:30.666344 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:56:30.667481 env[1141]: time="2023-10-02T19:56:30.667452039Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.9\"" Oct 2 19:56:31.231137 kubelet[1445]: E1002 19:56:31.231084 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:31.991740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4101639338.mount: Deactivated successfully. 
Oct 2 19:56:32.231618 kubelet[1445]: E1002 19:56:32.231575 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:32.361691 env[1141]: time="2023-10-02T19:56:32.361631842Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:32.362786 env[1141]: time="2023-10-02T19:56:32.362750811Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0393a046c6ac3c39d56f9b536c02216184f07904e0db26449490d0cb1d1fe343,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:32.364331 env[1141]: time="2023-10-02T19:56:32.364288283Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:32.365705 env[1141]: time="2023-10-02T19:56:32.365665024Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.9\" returns image reference \"sha256:0393a046c6ac3c39d56f9b536c02216184f07904e0db26449490d0cb1d1fe343\"" Oct 2 19:56:32.366322 env[1141]: time="2023-10-02T19:56:32.366273887Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d8c8e3e8fe630c3f2d84a22722d4891343196483ac4cc02c1ba9345b1bfc8a3d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:32.367345 env[1141]: time="2023-10-02T19:56:32.367316928Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 2 19:56:32.368743 env[1141]: time="2023-10-02T19:56:32.368708382Z" level=info msg="CreateContainer within sandbox \"409a0964e8649f0f27e9966894e6aea5a35c94f4b1b231358dd8049cc9b9472c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:56:32.383283 env[1141]: time="2023-10-02T19:56:32.383227827Z" level=info msg="CreateContainer within sandbox \"409a0964e8649f0f27e9966894e6aea5a35c94f4b1b231358dd8049cc9b9472c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"63204317c3d42cb64c7e4f094b6430dfb40d9d4d028862560bde646fe8d8894f\"" Oct 2 19:56:32.383955 env[1141]: time="2023-10-02T19:56:32.383912459Z" level=info msg="StartContainer for \"63204317c3d42cb64c7e4f094b6430dfb40d9d4d028862560bde646fe8d8894f\"" Oct 2 19:56:32.404417 systemd[1]: Started cri-containerd-63204317c3d42cb64c7e4f094b6430dfb40d9d4d028862560bde646fe8d8894f.scope. 
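The containerd entries in the records above trace the CRI flow for the kube-proxy pod: RunPodSandbox returns sandbox 409a0964..., PullImage resolves registry.k8s.io/kube-proxy:v1.26.9 to its digest, CreateContainer returns container 63204317..., and StartContainer hands that container to a cri-containerd systemd scope. A minimal Python sketch for pulling the structured fields out of those env[...] records (regex and names are ours, not containerd's):

# Minimal sketch, not part of containerd: extract time/level/msg from the
# env[...] records above (format: time="..." level=info msg="...").
import re

ENV_RE = re.compile(r'time="(?P<time>[^"]+)"\s+level=(?P<level>\w+)\s+msg="(?P<msg>.*)"')

def parse_containerd_line(line: str):
    m = ENV_RE.search(line)
    return m.groupdict() if m else None

# Sample adapted from the StartContainer record above (msg shortened):
sample = 'env[1141]: time="2023-10-02T19:56:32.383912459Z" level=info msg="StartContainer for ..."'
print(parse_containerd_line(sample))
# -> {'time': '2023-10-02T19:56:32.383912459Z', 'level': 'info', 'msg': 'StartContainer for ...'}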
Oct 2 19:56:32.440000 audit[1620]: AVC avc: denied { perfmon } for pid=1620 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:32.440000 audit[1620]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001bd5a0 a2=3c a3=0 items=0 ppid=1545 pid=1620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633323034333137633364343263623634633765346630393462363433 Oct 2 19:56:32.440000 audit[1620]: AVC avc: denied { bpf } for pid=1620 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:32.440000 audit[1620]: AVC avc: denied { bpf } for pid=1620 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:32.440000 audit[1620]: AVC avc: denied { bpf } for pid=1620 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:32.440000 audit[1620]: AVC avc: denied { perfmon } for pid=1620 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:32.440000 audit[1620]: AVC avc: denied { perfmon } for pid=1620 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:32.440000 audit[1620]: AVC avc: denied { perfmon } for pid=1620 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:32.440000 audit[1620]: AVC avc: denied { perfmon } for pid=1620 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:32.440000 audit[1620]: AVC avc: denied { perfmon } for pid=1620 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:32.440000 audit[1620]: AVC avc: denied { bpf } for pid=1620 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:32.440000 audit[1620]: AVC avc: denied { bpf } for pid=1620 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:32.440000 audit: BPF prog-id=69 op=LOAD Oct 2 19:56:32.440000 audit[1620]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001bd8e0 a2=78 a3=0 items=0 ppid=1545 pid=1620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.440000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633323034333137633364343263623634633765346630393462363433 Oct 2 19:56:32.441000 audit[1620]: AVC avc: denied { bpf } for pid=1620 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:32.441000 audit[1620]: AVC avc: denied { bpf } for pid=1620 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:32.441000 audit[1620]: AVC avc: denied { perfmon } for pid=1620 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:32.441000 audit[1620]: AVC avc: denied { perfmon } for pid=1620 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:32.441000 audit[1620]: AVC avc: denied { perfmon } for pid=1620 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:32.441000 audit[1620]: AVC avc: denied { perfmon } for pid=1620 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:32.441000 audit[1620]: AVC avc: denied { perfmon } for pid=1620 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:32.441000 audit[1620]: AVC avc: denied { bpf } for pid=1620 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:32.441000 audit[1620]: AVC avc: denied { bpf } for pid=1620 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:32.441000 audit: BPF prog-id=70 op=LOAD Oct 2 19:56:32.441000 audit[1620]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=40001bd670 a2=78 a3=0 items=0 ppid=1545 pid=1620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633323034333137633364343263623634633765346630393462363433 Oct 2 19:56:32.443000 audit: BPF prog-id=70 op=UNLOAD Oct 2 19:56:32.443000 audit: BPF prog-id=69 op=UNLOAD Oct 2 19:56:32.443000 audit[1620]: AVC avc: denied { bpf } for pid=1620 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:32.443000 audit[1620]: AVC avc: denied { bpf } for pid=1620 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:32.443000 audit[1620]: AVC avc: denied { bpf } for pid=1620 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:32.443000 audit[1620]: AVC avc: denied { perfmon } for pid=1620 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:32.443000 audit[1620]: AVC avc: denied { perfmon } for pid=1620 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:32.443000 audit[1620]: AVC avc: denied { perfmon } for pid=1620 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:32.443000 audit[1620]: AVC avc: denied { perfmon } for pid=1620 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:32.443000 audit[1620]: AVC avc: denied { perfmon } for pid=1620 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:32.443000 audit[1620]: AVC avc: denied { bpf } for pid=1620 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:32.443000 audit[1620]: AVC avc: denied { bpf } for pid=1620 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:56:32.443000 audit: BPF prog-id=71 op=LOAD Oct 2 19:56:32.443000 audit[1620]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001bdb40 a2=78 a3=0 items=0 ppid=1545 pid=1620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633323034333137633364343263623634633765346630393462363433 Oct 2 19:56:32.471045 env[1141]: time="2023-10-02T19:56:32.470998423Z" level=info msg="StartContainer for \"63204317c3d42cb64c7e4f094b6430dfb40d9d4d028862560bde646fe8d8894f\" returns successfully" Oct 2 19:56:32.555000 audit[1670]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=1670 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:32.555000 audit[1670]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd76ee980 a2=0 a3=ffffaa22a6c0 items=0 ppid=1631 pid=1670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.555000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:56:32.555000 audit[1671]: NETFILTER_CFG table=mangle:36 family=10 entries=1 op=nft_register_chain pid=1671 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:32.555000 audit[1671]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc4d749b0 a2=0 a3=ffffb479b6c0 items=0 ppid=1631 pid=1671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.555000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:56:32.557000 audit[1672]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=1672 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:32.557000 audit[1672]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffef118aa0 a2=0 a3=ffffb3dce6c0 items=0 ppid=1631 pid=1672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.557000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:56:32.559000 audit[1674]: NETFILTER_CFG table=nat:38 family=10 entries=1 op=nft_register_chain pid=1674 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:32.559000 audit[1674]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe2734460 a2=0 a3=ffffa53aa6c0 items=0 ppid=1631 pid=1674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.559000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:56:32.560000 audit[1676]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_chain pid=1676 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:32.560000 audit[1676]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff1b45680 a2=0 a3=ffff859db6c0 items=0 ppid=1631 pid=1676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.560000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:56:32.560000 audit[1677]: NETFILTER_CFG table=filter:40 family=10 entries=1 op=nft_register_chain pid=1677 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:32.560000 audit[1677]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff38610d0 a2=0 a3=ffffa39136c0 items=0 ppid=1631 pid=1677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.560000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:56:32.663000 audit[1678]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=1678 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:32.663000 audit[1678]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd0809820 a2=0 a3=ffff9dcc56c0 items=0 ppid=1631 pid=1678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.663000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:56:32.680000 audit[1680]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=1680 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:32.680000 audit[1680]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffef1b05d0 a2=0 a3=ffff9c5a06c0 items=0 ppid=1631 pid=1680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.680000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:56:32.686000 audit[1683]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=1683 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:32.686000 audit[1683]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffecf30a40 a2=0 a3=ffffa1ba86c0 items=0 ppid=1631 pid=1683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.686000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:56:32.690000 audit[1684]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=1684 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:32.690000 audit[1684]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffde6dbe30 a2=0 a3=ffffacf386c0 items=0 ppid=1631 pid=1684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.690000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:56:32.695000 audit[1686]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=1686 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:32.695000 audit[1686]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc6dab450 a2=0 a3=ffff9e1c06c0 items=0 ppid=1631 pid=1686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.695000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:56:32.696000 audit[1687]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=1687 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:32.696000 audit[1687]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffff529a20 a2=0 a3=ffff8e2f86c0 
items=0 ppid=1631 pid=1687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.696000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:56:32.698000 audit[1689]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=1689 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:32.698000 audit[1689]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff61dfb10 a2=0 a3=ffff969fd6c0 items=0 ppid=1631 pid=1689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.698000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:56:32.702000 audit[1692]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=1692 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:32.702000 audit[1692]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe0c06e60 a2=0 a3=ffffb0d026c0 items=0 ppid=1631 pid=1692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.702000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:56:32.703000 audit[1693]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=1693 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:32.703000 audit[1693]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffe2d13e0 a2=0 a3=ffff8ba506c0 items=0 ppid=1631 pid=1693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.703000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:56:32.706000 audit[1695]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=1695 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:32.706000 audit[1695]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd618d5c0 a2=0 a3=ffff972476c0 items=0 ppid=1631 pid=1695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.706000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:56:32.707000 audit[1696]: NETFILTER_CFG table=filter:51 family=2 
entries=1 op=nft_register_chain pid=1696 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:32.707000 audit[1696]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd6948520 a2=0 a3=ffff8a8226c0 items=0 ppid=1631 pid=1696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.707000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:56:32.709000 audit[1698]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=1698 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:32.709000 audit[1698]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd3af5d90 a2=0 a3=ffffbabd86c0 items=0 ppid=1631 pid=1698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.709000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:56:32.713000 audit[1701]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=1701 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:32.713000 audit[1701]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc3f770f0 a2=0 a3=ffffb571b6c0 items=0 ppid=1631 pid=1701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.713000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:56:32.717000 audit[1704]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=1704 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:32.717000 audit[1704]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe4948970 a2=0 a3=ffff8acc86c0 items=0 ppid=1631 pid=1704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.717000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:56:32.719000 audit[1705]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=1705 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:32.719000 audit[1705]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd4cd7df0 a2=0 a3=ffff8bbae6c0 items=0 ppid=1631 pid=1705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.719000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:56:32.721000 audit[1707]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=1707 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:32.721000 audit[1707]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffd0dbef60 a2=0 a3=ffff9cdbe6c0 items=0 ppid=1631 pid=1707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.721000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:56:32.725000 audit[1710]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=1710 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:56:32.725000 audit[1710]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=fffff19db200 a2=0 a3=ffff8410c6c0 items=0 ppid=1631 pid=1710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.725000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:56:32.769000 audit[1714]: NETFILTER_CFG table=filter:58 family=2 entries=6 op=nft_register_rule pid=1714 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:56:32.769000 audit[1714]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffcffa10f0 a2=0 a3=ffffbf0626c0 items=0 ppid=1631 pid=1714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.769000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:56:32.777000 audit[1714]: NETFILTER_CFG table=nat:59 family=2 entries=17 op=nft_register_chain pid=1714 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:56:32.777000 audit[1714]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffcffa10f0 a2=0 a3=ffffbf0626c0 items=0 ppid=1631 pid=1714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.777000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:56:32.795000 audit[1720]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=1720 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:32.795000 audit[1720]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc6e90cf0 a2=0 a3=ffff9c61a6c0 items=0 ppid=1631 pid=1720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.795000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:56:32.797000 audit[1722]: NETFILTER_CFG table=filter:61 family=10 entries=2 op=nft_register_chain pid=1722 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:32.797000 audit[1722]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffeb4e5e60 a2=0 a3=ffffba3b36c0 items=0 ppid=1631 pid=1722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.797000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:56:32.801000 audit[1725]: NETFILTER_CFG table=filter:62 family=10 entries=2 op=nft_register_chain pid=1725 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:32.801000 audit[1725]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffdd548310 a2=0 a3=ffff9ef4a6c0 items=0 ppid=1631 pid=1725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.801000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 19:56:32.802000 audit[1726]: NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=1726 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:32.802000 audit[1726]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd9daa7e0 a2=0 a3=ffff828f76c0 items=0 ppid=1631 pid=1726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.802000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:56:32.804000 audit[1728]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_rule pid=1728 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:32.804000 audit[1728]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc23fd560 a2=0 a3=ffffa42f36c0 items=0 ppid=1631 pid=1728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.804000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:56:32.806000 audit[1729]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=1729 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:32.806000 audit[1729]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe50603b0 a2=0 a3=ffff9a4c76c0 items=0 ppid=1631 pid=1729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.806000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:56:32.808000 audit[1731]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=1731 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:32.808000 audit[1731]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe8dd9130 a2=0 a3=ffff99bbb6c0 items=0 ppid=1631 pid=1731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.808000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 19:56:32.812000 audit[1734]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=1734 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:32.812000 audit[1734]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffc07e72c0 a2=0 a3=ffff95dec6c0 items=0 ppid=1631 pid=1734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.812000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:56:32.813000 audit[1735]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=1735 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:32.813000 audit[1735]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc69bcb00 a2=0 a3=ffffa779f6c0 items=0 ppid=1631 pid=1735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.813000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:56:32.816000 audit[1737]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=1737 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:32.816000 audit[1737]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd4c60700 a2=0 a3=ffffa8b586c0 items=0 ppid=1631 pid=1737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.816000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:56:32.817000 audit[1738]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=1738 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:32.817000 audit[1738]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe7a6e8f0 a2=0 a3=ffffba8ef6c0 items=0 ppid=1631 pid=1738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.817000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:56:32.820000 audit[1740]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=1740 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:32.820000 audit[1740]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffecd57930 a2=0 a3=ffffac8ed6c0 items=0 ppid=1631 pid=1740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.820000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:56:32.823000 audit[1743]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=1743 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:32.823000 audit[1743]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff63b5440 a2=0 a3=ffffa8c2b6c0 items=0 ppid=1631 pid=1743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.823000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:56:32.827000 audit[1746]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=1746 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:32.827000 audit[1746]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc5fc88d0 a2=0 a3=ffff950ce6c0 items=0 ppid=1631 pid=1746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.827000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:56:32.828000 audit[1747]: NETFILTER_CFG table=nat:74 family=10 entries=1 op=nft_register_chain pid=1747 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Oct 2 19:56:32.828000 audit[1747]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd4270960 a2=0 a3=ffffb903c6c0 items=0 ppid=1631 pid=1747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.828000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:56:32.830000 audit[1749]: NETFILTER_CFG table=nat:75 family=10 entries=2 op=nft_register_chain pid=1749 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:32.830000 audit[1749]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffc78ee5b0 a2=0 a3=ffff845c66c0 items=0 ppid=1631 pid=1749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.830000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:56:32.837000 audit[1752]: NETFILTER_CFG table=nat:76 family=10 entries=2 op=nft_register_chain pid=1752 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:56:32.837000 audit[1752]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffc0be4270 a2=0 a3=ffff9d84b6c0 items=0 ppid=1631 pid=1752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.837000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:56:32.842000 audit[1756]: NETFILTER_CFG table=filter:77 family=10 entries=3 op=nft_register_rule pid=1756 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:56:32.842000 audit[1756]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffd7b56810 a2=0 a3=ffff8f51c6c0 items=0 ppid=1631 pid=1756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.842000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:56:32.843000 audit[1756]: NETFILTER_CFG table=nat:78 family=10 entries=10 op=nft_register_chain pid=1756 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:56:32.843000 audit[1756]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1968 a0=3 a1=ffffd7b56810 a2=0 a3=ffff8f51c6c0 items=0 ppid=1631 pid=1756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:56:32.843000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:56:33.232351 kubelet[1445]: E1002 19:56:33.232296 1445 file_linux.go:61] "Unable 
to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:33.439916 kubelet[1445]: E1002 19:56:33.439888 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:56:34.232601 kubelet[1445]: E1002 19:56:34.232562 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:34.441232 kubelet[1445]: E1002 19:56:34.441202 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:56:35.232983 kubelet[1445]: E1002 19:56:35.232940 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:36.075479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1883456723.mount: Deactivated successfully. Oct 2 19:56:36.233695 kubelet[1445]: E1002 19:56:36.233642 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:37.233784 kubelet[1445]: E1002 19:56:37.233741 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:38.233925 kubelet[1445]: E1002 19:56:38.233877 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:38.482874 env[1141]: time="2023-10-02T19:56:38.482816961Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:38.484210 env[1141]: time="2023-10-02T19:56:38.484020897Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:38.485850 env[1141]: time="2023-10-02T19:56:38.485819662Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:56:38.486517 env[1141]: time="2023-10-02T19:56:38.486487192Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Oct 2 19:56:38.488116 env[1141]: time="2023-10-02T19:56:38.488086614Z" level=info msg="CreateContainer within sandbox \"914f2f63e420899efc2048b67f3c52c8971cb82d30c44753e7384a6bae9804ef\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:56:38.497943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount698549204.mount: Deactivated successfully. 
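The audit PROCTITLE values in the entries above are the audited process argv, hex-encoded with NUL bytes as argument separators. A minimal decoding sketch (Python; not part of the log — the sample value is copied from the ip6tables-restore entry above):

    # Decode an audit PROCTITLE field back into the original command line.
    hex_proctitle = "6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"
    argv = bytes.fromhex(hex_proctitle).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # prints: ip6tables-restore -w 5 -W 100000 --noflush --counters

Applied to the earlier entries, the same decoding yields the kube-proxy chain setup commands (KUBE-EXTERNAL-SERVICES, KUBE-NODEPORTS, KUBE-SERVICES, KUBE-FORWARD, ...); note that auditd truncates very long proctitle values mid-argument.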
Oct 2 19:56:38.501241 env[1141]: time="2023-10-02T19:56:38.501201985Z" level=info msg="CreateContainer within sandbox \"914f2f63e420899efc2048b67f3c52c8971cb82d30c44753e7384a6bae9804ef\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ff51aeb101f6dcbeee65dbe890930fdb6847b4cb5f68e316e2c8fa902b6f530f\"" Oct 2 19:56:38.501889 env[1141]: time="2023-10-02T19:56:38.501862476Z" level=info msg="StartContainer for \"ff51aeb101f6dcbeee65dbe890930fdb6847b4cb5f68e316e2c8fa902b6f530f\"" Oct 2 19:56:38.524469 systemd[1]: Started cri-containerd-ff51aeb101f6dcbeee65dbe890930fdb6847b4cb5f68e316e2c8fa902b6f530f.scope. Oct 2 19:56:38.546518 systemd[1]: cri-containerd-ff51aeb101f6dcbeee65dbe890930fdb6847b4cb5f68e316e2c8fa902b6f530f.scope: Deactivated successfully. Oct 2 19:56:38.771296 env[1141]: time="2023-10-02T19:56:38.770924034Z" level=info msg="shim disconnected" id=ff51aeb101f6dcbeee65dbe890930fdb6847b4cb5f68e316e2c8fa902b6f530f Oct 2 19:56:38.771296 env[1141]: time="2023-10-02T19:56:38.770978979Z" level=warning msg="cleaning up after shim disconnected" id=ff51aeb101f6dcbeee65dbe890930fdb6847b4cb5f68e316e2c8fa902b6f530f namespace=k8s.io Oct 2 19:56:38.771296 env[1141]: time="2023-10-02T19:56:38.770988256Z" level=info msg="cleaning up dead shim" Oct 2 19:56:38.781882 env[1141]: time="2023-10-02T19:56:38.781833276Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:56:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1782 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:56:38Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ff51aeb101f6dcbeee65dbe890930fdb6847b4cb5f68e316e2c8fa902b6f530f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:56:38.782184 env[1141]: time="2023-10-02T19:56:38.782093401Z" level=error msg="copy shim log" error="read /proc/self/fd/58: file already closed" Oct 2 19:56:38.782450 env[1141]: time="2023-10-02T19:56:38.782373041Z" level=error msg="Failed to pipe stdout of container \"ff51aeb101f6dcbeee65dbe890930fdb6847b4cb5f68e316e2c8fa902b6f530f\"" error="reading from a closed fifo" Oct 2 19:56:38.782995 env[1141]: time="2023-10-02T19:56:38.782960833Z" level=error msg="Failed to pipe stderr of container \"ff51aeb101f6dcbeee65dbe890930fdb6847b4cb5f68e316e2c8fa902b6f530f\"" error="reading from a closed fifo" Oct 2 19:56:38.784982 env[1141]: time="2023-10-02T19:56:38.784919793Z" level=error msg="StartContainer for \"ff51aeb101f6dcbeee65dbe890930fdb6847b4cb5f68e316e2c8fa902b6f530f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:56:38.785489 kubelet[1445]: E1002 19:56:38.785150 1445 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ff51aeb101f6dcbeee65dbe890930fdb6847b4cb5f68e316e2c8fa902b6f530f" Oct 2 19:56:38.785489 kubelet[1445]: E1002 19:56:38.785254 1445 kuberuntime_manager.go:872] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:56:38.785489 kubelet[1445]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:56:38.785489 kubelet[1445]: rm /hostbin/cilium-mount Oct 2 19:56:38.785664 kubelet[1445]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jd9mb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-wxnxh_kube-system(24e9887e-8f45-47a9-a855-9f7ea67bebf8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:56:38.785742 kubelet[1445]: E1002 19:56:38.785303 1445 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-wxnxh" podUID=24e9887e-8f45-47a9-a855-9f7ea67bebf8 Oct 2 19:56:39.234903 kubelet[1445]: E1002 19:56:39.234775 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:39.450295 kubelet[1445]: E1002 19:56:39.450238 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:56:39.452187 env[1141]: time="2023-10-02T19:56:39.452138027Z" level=info msg="CreateContainer within sandbox \"914f2f63e420899efc2048b67f3c52c8971cb82d30c44753e7384a6bae9804ef\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:56:39.466841 kubelet[1445]: I1002 19:56:39.466799 1445 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-xg947" podStartSLOduration=-9.223372019388014e+09 pod.CreationTimestamp="2023-10-02 19:56:22 +0000 UTC" 
firstStartedPulling="2023-10-02 19:56:30.666851606 +0000 UTC m=+22.255609384" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 19:56:33.449058539 +0000 UTC m=+25.037816317" watchObservedRunningTime="2023-10-02 19:56:39.466761987 +0000 UTC m=+31.055519765" Oct 2 19:56:39.485835 env[1141]: time="2023-10-02T19:56:39.485535556Z" level=info msg="CreateContainer within sandbox \"914f2f63e420899efc2048b67f3c52c8971cb82d30c44753e7384a6bae9804ef\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"a1ed305c126f427d2071cc18675183f179a905e42880f6467a702b977ed72b2a\"" Oct 2 19:56:39.486431 env[1141]: time="2023-10-02T19:56:39.486295792Z" level=info msg="StartContainer for \"a1ed305c126f427d2071cc18675183f179a905e42880f6467a702b977ed72b2a\"" Oct 2 19:56:39.497272 systemd[1]: run-containerd-runc-k8s.io-ff51aeb101f6dcbeee65dbe890930fdb6847b4cb5f68e316e2c8fa902b6f530f-runc.pbVB0b.mount: Deactivated successfully. Oct 2 19:56:39.497386 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff51aeb101f6dcbeee65dbe890930fdb6847b4cb5f68e316e2c8fa902b6f530f-rootfs.mount: Deactivated successfully. Oct 2 19:56:39.505141 systemd[1]: Started cri-containerd-a1ed305c126f427d2071cc18675183f179a905e42880f6467a702b977ed72b2a.scope. Oct 2 19:56:39.521428 systemd[1]: cri-containerd-a1ed305c126f427d2071cc18675183f179a905e42880f6467a702b977ed72b2a.scope: Deactivated successfully. Oct 2 19:56:39.524645 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1ed305c126f427d2071cc18675183f179a905e42880f6467a702b977ed72b2a-rootfs.mount: Deactivated successfully. Oct 2 19:56:39.528686 env[1141]: time="2023-10-02T19:56:39.528628606Z" level=info msg="shim disconnected" id=a1ed305c126f427d2071cc18675183f179a905e42880f6467a702b977ed72b2a Oct 2 19:56:39.528886 env[1141]: time="2023-10-02T19:56:39.528866462Z" level=warning msg="cleaning up after shim disconnected" id=a1ed305c126f427d2071cc18675183f179a905e42880f6467a702b977ed72b2a namespace=k8s.io Oct 2 19:56:39.528969 env[1141]: time="2023-10-02T19:56:39.528955878Z" level=info msg="cleaning up dead shim" Oct 2 19:56:39.536672 env[1141]: time="2023-10-02T19:56:39.536629142Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:56:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1820 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:56:39Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/a1ed305c126f427d2071cc18675183f179a905e42880f6467a702b977ed72b2a/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:56:39.537029 env[1141]: time="2023-10-02T19:56:39.536975569Z" level=error msg="copy shim log" error="read /proc/self/fd/58: file already closed" Oct 2 19:56:39.537253 env[1141]: time="2023-10-02T19:56:39.537206267Z" level=error msg="Failed to pipe stderr of container \"a1ed305c126f427d2071cc18675183f179a905e42880f6467a702b977ed72b2a\"" error="reading from a closed fifo" Oct 2 19:56:39.540380 env[1141]: time="2023-10-02T19:56:39.540341467Z" level=error msg="Failed to pipe stdout of container \"a1ed305c126f427d2071cc18675183f179a905e42880f6467a702b977ed72b2a\"" error="reading from a closed fifo" Oct 2 19:56:39.542746 env[1141]: time="2023-10-02T19:56:39.542686798Z" level=error msg="StartContainer for \"a1ed305c126f427d2071cc18675183f179a905e42880f6467a702b977ed72b2a\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: 
unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:56:39.543065 kubelet[1445]: E1002 19:56:39.543019 1445 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="a1ed305c126f427d2071cc18675183f179a905e42880f6467a702b977ed72b2a" Oct 2 19:56:39.543495 kubelet[1445]: E1002 19:56:39.543141 1445 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:56:39.543495 kubelet[1445]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:56:39.543495 kubelet[1445]: rm /hostbin/cilium-mount Oct 2 19:56:39.543495 kubelet[1445]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jd9mb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-wxnxh_kube-system(24e9887e-8f45-47a9-a855-9f7ea67bebf8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:56:39.543865 kubelet[1445]: E1002 19:56:39.543191 1445 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-wxnxh" podUID=24e9887e-8f45-47a9-a855-9f7ea67bebf8 Oct 2 19:56:40.235775 kubelet[1445]: E1002 19:56:40.235743 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:40.452700 kubelet[1445]: I1002 19:56:40.452677 1445 
scope.go:115] "RemoveContainer" containerID="ff51aeb101f6dcbeee65dbe890930fdb6847b4cb5f68e316e2c8fa902b6f530f" Oct 2 19:56:40.453193 kubelet[1445]: I1002 19:56:40.453174 1445 scope.go:115] "RemoveContainer" containerID="ff51aeb101f6dcbeee65dbe890930fdb6847b4cb5f68e316e2c8fa902b6f530f" Oct 2 19:56:40.454194 env[1141]: time="2023-10-02T19:56:40.454162697Z" level=info msg="RemoveContainer for \"ff51aeb101f6dcbeee65dbe890930fdb6847b4cb5f68e316e2c8fa902b6f530f\"" Oct 2 19:56:40.455037 env[1141]: time="2023-10-02T19:56:40.455009764Z" level=info msg="RemoveContainer for \"ff51aeb101f6dcbeee65dbe890930fdb6847b4cb5f68e316e2c8fa902b6f530f\"" Oct 2 19:56:40.455141 env[1141]: time="2023-10-02T19:56:40.455080906Z" level=error msg="RemoveContainer for \"ff51aeb101f6dcbeee65dbe890930fdb6847b4cb5f68e316e2c8fa902b6f530f\" failed" error="failed to set removing state for container \"ff51aeb101f6dcbeee65dbe890930fdb6847b4cb5f68e316e2c8fa902b6f530f\": container is already in removing state" Oct 2 19:56:40.455236 kubelet[1445]: E1002 19:56:40.455196 1445 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"ff51aeb101f6dcbeee65dbe890930fdb6847b4cb5f68e316e2c8fa902b6f530f\": container is already in removing state" containerID="ff51aeb101f6dcbeee65dbe890930fdb6847b4cb5f68e316e2c8fa902b6f530f" Oct 2 19:56:40.455297 kubelet[1445]: E1002 19:56:40.455249 1445 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "ff51aeb101f6dcbeee65dbe890930fdb6847b4cb5f68e316e2c8fa902b6f530f": container is already in removing state; Skipping pod "cilium-wxnxh_kube-system(24e9887e-8f45-47a9-a855-9f7ea67bebf8)" Oct 2 19:56:40.455336 kubelet[1445]: E1002 19:56:40.455320 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:56:40.455721 kubelet[1445]: E1002 19:56:40.455547 1445 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-wxnxh_kube-system(24e9887e-8f45-47a9-a855-9f7ea67bebf8)\"" pod="kube-system/cilium-wxnxh" podUID=24e9887e-8f45-47a9-a855-9f7ea67bebf8 Oct 2 19:56:40.457231 env[1141]: time="2023-10-02T19:56:40.457205492Z" level=info msg="RemoveContainer for \"ff51aeb101f6dcbeee65dbe890930fdb6847b4cb5f68e316e2c8fa902b6f530f\" returns successfully" Oct 2 19:56:41.236846 kubelet[1445]: E1002 19:56:41.236788 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:41.455693 kubelet[1445]: E1002 19:56:41.455347 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:56:41.455693 kubelet[1445]: E1002 19:56:41.455666 1445 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-wxnxh_kube-system(24e9887e-8f45-47a9-a855-9f7ea67bebf8)\"" pod="kube-system/cilium-wxnxh" podUID=24e9887e-8f45-47a9-a855-9f7ea67bebf8 Oct 2 19:56:41.879331 kubelet[1445]: W1002 19:56:41.878337 1445 manager.go:1174] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24e9887e_8f45_47a9_a855_9f7ea67bebf8.slice/cri-containerd-ff51aeb101f6dcbeee65dbe890930fdb6847b4cb5f68e316e2c8fa902b6f530f.scope WatchSource:0}: container "ff51aeb101f6dcbeee65dbe890930fdb6847b4cb5f68e316e2c8fa902b6f530f" in namespace "k8s.io": not found Oct 2 19:56:42.237212 kubelet[1445]: E1002 19:56:42.236867 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:43.237634 kubelet[1445]: E1002 19:56:43.237592 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:44.237804 kubelet[1445]: E1002 19:56:44.237747 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:44.983019 kubelet[1445]: W1002 19:56:44.982982 1445 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24e9887e_8f45_47a9_a855_9f7ea67bebf8.slice/cri-containerd-a1ed305c126f427d2071cc18675183f179a905e42880f6467a702b977ed72b2a.scope WatchSource:0}: task a1ed305c126f427d2071cc18675183f179a905e42880f6467a702b977ed72b2a not found: not found Oct 2 19:56:45.238926 kubelet[1445]: E1002 19:56:45.238603 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:46.238717 kubelet[1445]: E1002 19:56:46.238684 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:47.239111 kubelet[1445]: E1002 19:56:47.239077 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:48.240172 kubelet[1445]: E1002 19:56:48.240129 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:48.943915 update_engine[1134]: I1002 19:56:48.943872 1134 update_attempter.cc:505] Updating boot flags... 
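The recurring dns.go:156 warnings above come from kubelet capping the nameserver list it takes from the node's resolv.conf; upstream kubelet applies at most three nameservers, which matches the three addresses shown in the warning. A rough illustration of that truncation (Python; the limit of 3 is the assumed kubelet default and the fourth input entry is hypothetical — only the three applied addresses appear in the log):

    # Sketch of the truncation behind "Nameserver limits exceeded".
    MAX_NAMESERVERS = 3  # assumed kubelet limit
    resolv_conf = ["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"]  # 4th entry is hypothetical
    applied = resolv_conf[:MAX_NAMESERVERS]
    if len(resolv_conf) > MAX_NAMESERVERS:
        print("Nameserver limits exceeded; applied nameserver line:", " ".join(applied))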
Oct 2 19:56:49.216010 kubelet[1445]: E1002 19:56:49.215669 1445 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:49.240843 kubelet[1445]: E1002 19:56:49.240791 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:50.241636 kubelet[1445]: E1002 19:56:50.241606 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:51.242131 kubelet[1445]: E1002 19:56:51.242097 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:52.242595 kubelet[1445]: E1002 19:56:52.242554 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:53.242925 kubelet[1445]: E1002 19:56:53.242879 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:54.243805 kubelet[1445]: E1002 19:56:54.243768 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:55.244090 kubelet[1445]: E1002 19:56:55.244025 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:55.399337 kubelet[1445]: E1002 19:56:55.399304 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:56:55.401956 env[1141]: time="2023-10-02T19:56:55.401900046Z" level=info msg="CreateContainer within sandbox \"914f2f63e420899efc2048b67f3c52c8971cb82d30c44753e7384a6bae9804ef\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:56:55.414686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount268201348.mount: Deactivated successfully. Oct 2 19:56:55.416658 env[1141]: time="2023-10-02T19:56:55.416564927Z" level=info msg="CreateContainer within sandbox \"914f2f63e420899efc2048b67f3c52c8971cb82d30c44753e7384a6bae9804ef\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"280bed18de62bf439e5117029bc8cbf4c53810577004f98416ce8f66b7d44413\"" Oct 2 19:56:55.417238 env[1141]: time="2023-10-02T19:56:55.417209106Z" level=info msg="StartContainer for \"280bed18de62bf439e5117029bc8cbf4c53810577004f98416ce8f66b7d44413\"" Oct 2 19:56:55.433486 systemd[1]: Started cri-containerd-280bed18de62bf439e5117029bc8cbf4c53810577004f98416ce8f66b7d44413.scope. Oct 2 19:56:55.465094 systemd[1]: cri-containerd-280bed18de62bf439e5117029bc8cbf4c53810577004f98416ce8f66b7d44413.scope: Deactivated successfully. 
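The repeated StartContainer failures above are runc receiving EINVAL ("invalid argument") when it writes an SELinux label to /proc/self/attr/keycreate while starting the mount-cgroup init container, whose logged spec carries SELinuxOptions Type:spc_t, Level:s0. A small diagnostic sketch that attempts the same write on the host (Python; the full label string is an assumption built from the logged Type/Level, and whether the write succeeds depends on the host's kernel and SELinux policy):

    # Attempt the write runc performs during container init; on a host whose
    # kernel/policy rejects the label this fails with EINVAL, as in the log.
    label = "system_u:system_r:spc_t:s0"  # assumed context derived from Type:spc_t, Level:s0
    try:
        with open("/proc/self/attr/keycreate", "w") as f:
            f.write(label)
        print("keycreate label accepted")
    except OSError as exc:
        print("keycreate write failed:", exc)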
Oct 2 19:56:55.471528 env[1141]: time="2023-10-02T19:56:55.471481287Z" level=info msg="shim disconnected" id=280bed18de62bf439e5117029bc8cbf4c53810577004f98416ce8f66b7d44413 Oct 2 19:56:55.471776 env[1141]: time="2023-10-02T19:56:55.471754621Z" level=warning msg="cleaning up after shim disconnected" id=280bed18de62bf439e5117029bc8cbf4c53810577004f98416ce8f66b7d44413 namespace=k8s.io Oct 2 19:56:55.471876 env[1141]: time="2023-10-02T19:56:55.471861891Z" level=info msg="cleaning up dead shim" Oct 2 19:56:55.479739 env[1141]: time="2023-10-02T19:56:55.479696143Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:56:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1872 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:56:55Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/280bed18de62bf439e5117029bc8cbf4c53810577004f98416ce8f66b7d44413/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:56:55.480149 env[1141]: time="2023-10-02T19:56:55.480089066Z" level=error msg="copy shim log" error="read /proc/self/fd/58: file already closed" Oct 2 19:56:55.480376 env[1141]: time="2023-10-02T19:56:55.480345481Z" level=error msg="Failed to pipe stderr of container \"280bed18de62bf439e5117029bc8cbf4c53810577004f98416ce8f66b7d44413\"" error="reading from a closed fifo" Oct 2 19:56:55.480517 env[1141]: time="2023-10-02T19:56:55.480468430Z" level=error msg="Failed to pipe stdout of container \"280bed18de62bf439e5117029bc8cbf4c53810577004f98416ce8f66b7d44413\"" error="reading from a closed fifo" Oct 2 19:56:55.482226 env[1141]: time="2023-10-02T19:56:55.482177506Z" level=error msg="StartContainer for \"280bed18de62bf439e5117029bc8cbf4c53810577004f98416ce8f66b7d44413\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:56:55.482439 kubelet[1445]: E1002 19:56:55.482413 1445 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="280bed18de62bf439e5117029bc8cbf4c53810577004f98416ce8f66b7d44413" Oct 2 19:56:55.482551 kubelet[1445]: E1002 19:56:55.482522 1445 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:56:55.482551 kubelet[1445]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:56:55.482551 kubelet[1445]: rm /hostbin/cilium-mount Oct 2 19:56:55.482551 kubelet[1445]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jd9mb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-wxnxh_kube-system(24e9887e-8f45-47a9-a855-9f7ea67bebf8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:56:55.482693 kubelet[1445]: E1002 19:56:55.482567 1445 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-wxnxh" podUID=24e9887e-8f45-47a9-a855-9f7ea67bebf8 Oct 2 19:56:56.244220 kubelet[1445]: E1002 19:56:56.244156 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:56.410160 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-280bed18de62bf439e5117029bc8cbf4c53810577004f98416ce8f66b7d44413-rootfs.mount: Deactivated successfully. 
Oct 2 19:56:56.475107 kubelet[1445]: I1002 19:56:56.475082 1445 scope.go:115] "RemoveContainer" containerID="a1ed305c126f427d2071cc18675183f179a905e42880f6467a702b977ed72b2a" Oct 2 19:56:56.475763 kubelet[1445]: I1002 19:56:56.475734 1445 scope.go:115] "RemoveContainer" containerID="a1ed305c126f427d2071cc18675183f179a905e42880f6467a702b977ed72b2a" Oct 2 19:56:56.476343 env[1141]: time="2023-10-02T19:56:56.476304245Z" level=info msg="RemoveContainer for \"a1ed305c126f427d2071cc18675183f179a905e42880f6467a702b977ed72b2a\"" Oct 2 19:56:56.477048 env[1141]: time="2023-10-02T19:56:56.477021341Z" level=info msg="RemoveContainer for \"a1ed305c126f427d2071cc18675183f179a905e42880f6467a702b977ed72b2a\"" Oct 2 19:56:56.477241 env[1141]: time="2023-10-02T19:56:56.477207884Z" level=error msg="RemoveContainer for \"a1ed305c126f427d2071cc18675183f179a905e42880f6467a702b977ed72b2a\" failed" error="failed to set removing state for container \"a1ed305c126f427d2071cc18675183f179a905e42880f6467a702b977ed72b2a\": container is already in removing state" Oct 2 19:56:56.477496 kubelet[1445]: E1002 19:56:56.477481 1445 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"a1ed305c126f427d2071cc18675183f179a905e42880f6467a702b977ed72b2a\": container is already in removing state" containerID="a1ed305c126f427d2071cc18675183f179a905e42880f6467a702b977ed72b2a" Oct 2 19:56:56.477608 kubelet[1445]: E1002 19:56:56.477595 1445 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "a1ed305c126f427d2071cc18675183f179a905e42880f6467a702b977ed72b2a": container is already in removing state; Skipping pod "cilium-wxnxh_kube-system(24e9887e-8f45-47a9-a855-9f7ea67bebf8)" Oct 2 19:56:56.477711 kubelet[1445]: E1002 19:56:56.477699 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:56:56.477986 kubelet[1445]: E1002 19:56:56.477969 1445 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-wxnxh_kube-system(24e9887e-8f45-47a9-a855-9f7ea67bebf8)\"" pod="kube-system/cilium-wxnxh" podUID=24e9887e-8f45-47a9-a855-9f7ea67bebf8 Oct 2 19:56:56.478613 env[1141]: time="2023-10-02T19:56:56.478569283Z" level=info msg="RemoveContainer for \"a1ed305c126f427d2071cc18675183f179a905e42880f6467a702b977ed72b2a\" returns successfully" Oct 2 19:56:57.244758 kubelet[1445]: E1002 19:56:57.244716 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:58.245818 kubelet[1445]: E1002 19:56:58.245783 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:56:58.575855 kubelet[1445]: W1002 19:56:58.575756 1445 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24e9887e_8f45_47a9_a855_9f7ea67bebf8.slice/cri-containerd-280bed18de62bf439e5117029bc8cbf4c53810577004f98416ce8f66b7d44413.scope WatchSource:0}: task 280bed18de62bf439e5117029bc8cbf4c53810577004f98416ce8f66b7d44413 not found: not found Oct 2 19:56:59.246845 kubelet[1445]: E1002 19:56:59.246745 1445 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:00.247225 kubelet[1445]: E1002 19:57:00.247167 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:01.247613 kubelet[1445]: E1002 19:57:01.247567 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:02.248465 kubelet[1445]: E1002 19:57:02.248417 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:03.248882 kubelet[1445]: E1002 19:57:03.248838 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:04.249460 kubelet[1445]: E1002 19:57:04.249418 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:05.250505 kubelet[1445]: E1002 19:57:05.250466 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:06.251296 kubelet[1445]: E1002 19:57:06.251226 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:07.251711 kubelet[1445]: E1002 19:57:07.251650 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:08.252324 kubelet[1445]: E1002 19:57:08.252298 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:09.216310 kubelet[1445]: E1002 19:57:09.215970 1445 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:09.253601 kubelet[1445]: E1002 19:57:09.253561 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:10.253939 kubelet[1445]: E1002 19:57:10.253892 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:11.254608 kubelet[1445]: E1002 19:57:11.254565 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:11.399719 kubelet[1445]: E1002 19:57:11.399686 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:57:11.400512 kubelet[1445]: E1002 19:57:11.400485 1445 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-wxnxh_kube-system(24e9887e-8f45-47a9-a855-9f7ea67bebf8)\"" pod="kube-system/cilium-wxnxh" podUID=24e9887e-8f45-47a9-a855-9f7ea67bebf8 Oct 2 19:57:12.255267 kubelet[1445]: E1002 19:57:12.255237 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:13.256466 kubelet[1445]: E1002 19:57:13.256427 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:14.257216 kubelet[1445]: E1002 19:57:14.257170 1445 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:15.257917 kubelet[1445]: E1002 19:57:15.257872 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:16.258368 kubelet[1445]: E1002 19:57:16.258335 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:17.259814 kubelet[1445]: E1002 19:57:17.259772 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:18.260850 kubelet[1445]: E1002 19:57:18.260796 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:19.261320 kubelet[1445]: E1002 19:57:19.261263 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:20.261432 kubelet[1445]: E1002 19:57:20.261389 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:21.262472 kubelet[1445]: E1002 19:57:21.262430 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:22.263851 kubelet[1445]: E1002 19:57:22.263804 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:23.264815 kubelet[1445]: E1002 19:57:23.264765 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:24.265045 kubelet[1445]: E1002 19:57:24.265003 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:25.265752 kubelet[1445]: E1002 19:57:25.265708 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:25.398888 kubelet[1445]: E1002 19:57:25.398845 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:57:25.401458 env[1141]: time="2023-10-02T19:57:25.401408851Z" level=info msg="CreateContainer within sandbox \"914f2f63e420899efc2048b67f3c52c8971cb82d30c44753e7384a6bae9804ef\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:57:25.410385 env[1141]: time="2023-10-02T19:57:25.410273092Z" level=info msg="CreateContainer within sandbox \"914f2f63e420899efc2048b67f3c52c8971cb82d30c44753e7384a6bae9804ef\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"8bc57318207308aa9e5f8efea0185e837ee770aa35b59a620aed856482041b44\"" Oct 2 19:57:25.410749 env[1141]: time="2023-10-02T19:57:25.410720528Z" level=info msg="StartContainer for \"8bc57318207308aa9e5f8efea0185e837ee770aa35b59a620aed856482041b44\"" Oct 2 19:57:25.434020 systemd[1]: Started cri-containerd-8bc57318207308aa9e5f8efea0185e837ee770aa35b59a620aed856482041b44.scope. Oct 2 19:57:25.446427 systemd[1]: cri-containerd-8bc57318207308aa9e5f8efea0185e837ee770aa35b59a620aed856482041b44.scope: Deactivated successfully. Oct 2 19:57:25.449667 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8bc57318207308aa9e5f8efea0185e837ee770aa35b59a620aed856482041b44-rootfs.mount: Deactivated successfully. 
Oct 2 19:57:25.454697 env[1141]: time="2023-10-02T19:57:25.454639976Z" level=info msg="shim disconnected" id=8bc57318207308aa9e5f8efea0185e837ee770aa35b59a620aed856482041b44 Oct 2 19:57:25.454697 env[1141]: time="2023-10-02T19:57:25.454698575Z" level=warning msg="cleaning up after shim disconnected" id=8bc57318207308aa9e5f8efea0185e837ee770aa35b59a620aed856482041b44 namespace=k8s.io Oct 2 19:57:25.454883 env[1141]: time="2023-10-02T19:57:25.454709335Z" level=info msg="cleaning up dead shim" Oct 2 19:57:25.462673 env[1141]: time="2023-10-02T19:57:25.462579985Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:57:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1914 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:57:25Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/8bc57318207308aa9e5f8efea0185e837ee770aa35b59a620aed856482041b44/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:57:25.462880 env[1141]: time="2023-10-02T19:57:25.462824423Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:57:25.466362 env[1141]: time="2023-10-02T19:57:25.466318551Z" level=error msg="Failed to pipe stdout of container \"8bc57318207308aa9e5f8efea0185e837ee770aa35b59a620aed856482041b44\"" error="reading from a closed fifo" Oct 2 19:57:25.468428 env[1141]: time="2023-10-02T19:57:25.468396053Z" level=error msg="Failed to pipe stderr of container \"8bc57318207308aa9e5f8efea0185e837ee770aa35b59a620aed856482041b44\"" error="reading from a closed fifo" Oct 2 19:57:25.470019 env[1141]: time="2023-10-02T19:57:25.469970319Z" level=error msg="StartContainer for \"8bc57318207308aa9e5f8efea0185e837ee770aa35b59a620aed856482041b44\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:57:25.470514 kubelet[1445]: E1002 19:57:25.470348 1445 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="8bc57318207308aa9e5f8efea0185e837ee770aa35b59a620aed856482041b44" Oct 2 19:57:25.470514 kubelet[1445]: E1002 19:57:25.470457 1445 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:57:25.470514 kubelet[1445]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:57:25.470514 kubelet[1445]: rm /hostbin/cilium-mount Oct 2 19:57:25.470696 kubelet[1445]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jd9mb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-wxnxh_kube-system(24e9887e-8f45-47a9-a855-9f7ea67bebf8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:57:25.470753 kubelet[1445]: E1002 19:57:25.470492 1445 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-wxnxh" podUID=24e9887e-8f45-47a9-a855-9f7ea67bebf8 Oct 2 19:57:25.520187 kubelet[1445]: I1002 19:57:25.518764 1445 scope.go:115] "RemoveContainer" containerID="280bed18de62bf439e5117029bc8cbf4c53810577004f98416ce8f66b7d44413" Oct 2 19:57:25.520187 kubelet[1445]: I1002 19:57:25.519227 1445 scope.go:115] "RemoveContainer" containerID="280bed18de62bf439e5117029bc8cbf4c53810577004f98416ce8f66b7d44413" Oct 2 19:57:25.521855 env[1141]: time="2023-10-02T19:57:25.521665497Z" level=info msg="RemoveContainer for \"280bed18de62bf439e5117029bc8cbf4c53810577004f98416ce8f66b7d44413\"" Oct 2 19:57:25.522121 env[1141]: time="2023-10-02T19:57:25.522020894Z" level=info msg="RemoveContainer for \"280bed18de62bf439e5117029bc8cbf4c53810577004f98416ce8f66b7d44413\"" Oct 2 19:57:25.522433 env[1141]: time="2023-10-02T19:57:25.522171652Z" level=error msg="RemoveContainer for \"280bed18de62bf439e5117029bc8cbf4c53810577004f98416ce8f66b7d44413\" failed" error="failed to set removing state for container \"280bed18de62bf439e5117029bc8cbf4c53810577004f98416ce8f66b7d44413\": container is already in removing state" Oct 2 19:57:25.523169 kubelet[1445]: E1002 19:57:25.522635 1445 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"280bed18de62bf439e5117029bc8cbf4c53810577004f98416ce8f66b7d44413\": container is already in removing state" 
containerID="280bed18de62bf439e5117029bc8cbf4c53810577004f98416ce8f66b7d44413" Oct 2 19:57:25.523169 kubelet[1445]: E1002 19:57:25.522665 1445 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "280bed18de62bf439e5117029bc8cbf4c53810577004f98416ce8f66b7d44413": container is already in removing state; Skipping pod "cilium-wxnxh_kube-system(24e9887e-8f45-47a9-a855-9f7ea67bebf8)" Oct 2 19:57:25.523169 kubelet[1445]: E1002 19:57:25.522716 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:57:25.523169 kubelet[1445]: E1002 19:57:25.523113 1445 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-wxnxh_kube-system(24e9887e-8f45-47a9-a855-9f7ea67bebf8)\"" pod="kube-system/cilium-wxnxh" podUID=24e9887e-8f45-47a9-a855-9f7ea67bebf8 Oct 2 19:57:25.524088 env[1141]: time="2023-10-02T19:57:25.524045156Z" level=info msg="RemoveContainer for \"280bed18de62bf439e5117029bc8cbf4c53810577004f98416ce8f66b7d44413\" returns successfully" Oct 2 19:57:26.266658 kubelet[1445]: E1002 19:57:26.266598 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:27.267546 kubelet[1445]: E1002 19:57:27.267501 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:28.267625 kubelet[1445]: E1002 19:57:28.267574 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:28.559816 kubelet[1445]: W1002 19:57:28.559696 1445 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24e9887e_8f45_47a9_a855_9f7ea67bebf8.slice/cri-containerd-8bc57318207308aa9e5f8efea0185e837ee770aa35b59a620aed856482041b44.scope WatchSource:0}: task 8bc57318207308aa9e5f8efea0185e837ee770aa35b59a620aed856482041b44 not found: not found Oct 2 19:57:29.216134 kubelet[1445]: E1002 19:57:29.216087 1445 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:29.268435 kubelet[1445]: E1002 19:57:29.268391 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:30.269541 kubelet[1445]: E1002 19:57:30.269470 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:31.270634 kubelet[1445]: E1002 19:57:31.270598 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:32.273179 kubelet[1445]: E1002 19:57:32.272943 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:33.273222 kubelet[1445]: E1002 19:57:33.273158 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:34.273723 kubelet[1445]: E1002 19:57:34.273691 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:57:35.274650 kubelet[1445]: E1002 19:57:35.274622 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:36.275785 kubelet[1445]: E1002 19:57:36.275724 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:36.398939 kubelet[1445]: E1002 19:57:36.398905 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:57:36.399144 kubelet[1445]: E1002 19:57:36.399111 1445 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-wxnxh_kube-system(24e9887e-8f45-47a9-a855-9f7ea67bebf8)\"" pod="kube-system/cilium-wxnxh" podUID=24e9887e-8f45-47a9-a855-9f7ea67bebf8 Oct 2 19:57:37.276597 kubelet[1445]: E1002 19:57:37.276551 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:38.277483 kubelet[1445]: E1002 19:57:38.277419 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:39.278177 kubelet[1445]: E1002 19:57:39.278046 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:40.279157 kubelet[1445]: E1002 19:57:40.279115 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:41.279808 kubelet[1445]: E1002 19:57:41.279757 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:42.280070 kubelet[1445]: E1002 19:57:42.280005 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:43.280872 kubelet[1445]: E1002 19:57:43.280829 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:44.281670 kubelet[1445]: E1002 19:57:44.281596 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:45.281788 kubelet[1445]: E1002 19:57:45.281741 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:46.282156 kubelet[1445]: E1002 19:57:46.282110 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:47.282473 kubelet[1445]: E1002 19:57:47.282411 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:48.283410 kubelet[1445]: E1002 19:57:48.283344 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:49.216143 kubelet[1445]: E1002 19:57:49.216087 1445 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:49.283757 kubelet[1445]: E1002 19:57:49.283708 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:49.399672 kubelet[1445]: E1002 
19:57:49.399634 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:57:49.400195 kubelet[1445]: E1002 19:57:49.400168 1445 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-wxnxh_kube-system(24e9887e-8f45-47a9-a855-9f7ea67bebf8)\"" pod="kube-system/cilium-wxnxh" podUID=24e9887e-8f45-47a9-a855-9f7ea67bebf8 Oct 2 19:57:50.284034 kubelet[1445]: E1002 19:57:50.283998 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:51.284887 kubelet[1445]: E1002 19:57:51.284849 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:52.286146 kubelet[1445]: E1002 19:57:52.286118 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:53.287173 kubelet[1445]: E1002 19:57:53.287136 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:54.288225 kubelet[1445]: E1002 19:57:54.288176 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:55.289251 kubelet[1445]: E1002 19:57:55.289202 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:56.290014 kubelet[1445]: E1002 19:57:56.289984 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:57.291095 kubelet[1445]: E1002 19:57:57.291052 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:58.291365 kubelet[1445]: E1002 19:57:58.291333 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:57:58.399582 kubelet[1445]: E1002 19:57:58.399555 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:57:59.292297 kubelet[1445]: E1002 19:57:59.292253 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:00.293538 kubelet[1445]: E1002 19:58:00.293263 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:00.399098 kubelet[1445]: E1002 19:58:00.399063 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:58:00.399342 kubelet[1445]: E1002 19:58:00.399269 1445 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-wxnxh_kube-system(24e9887e-8f45-47a9-a855-9f7ea67bebf8)\"" pod="kube-system/cilium-wxnxh" podUID=24e9887e-8f45-47a9-a855-9f7ea67bebf8 Oct 2 19:58:01.293846 kubelet[1445]: E1002 19:58:01.293806 1445 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:02.294472 kubelet[1445]: E1002 19:58:02.294432 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:03.295443 kubelet[1445]: E1002 19:58:03.295402 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:04.295964 kubelet[1445]: E1002 19:58:04.295926 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:05.296912 kubelet[1445]: E1002 19:58:05.296884 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:06.298038 kubelet[1445]: E1002 19:58:06.297997 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:07.298841 kubelet[1445]: E1002 19:58:07.298799 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:08.299820 kubelet[1445]: E1002 19:58:08.299753 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:09.216036 kubelet[1445]: E1002 19:58:09.215992 1445 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:09.300758 kubelet[1445]: E1002 19:58:09.300724 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:09.321496 kubelet[1445]: E1002 19:58:09.321467 1445 kubelet_node_status.go:452] "Node not becoming ready in time after startup" Oct 2 19:58:10.301928 kubelet[1445]: E1002 19:58:10.301849 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:11.302724 kubelet[1445]: E1002 19:58:11.302658 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:12.303400 kubelet[1445]: E1002 19:58:12.303333 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:13.303746 kubelet[1445]: E1002 19:58:13.303700 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:14.304274 kubelet[1445]: E1002 19:58:14.304223 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:14.308335 kubelet[1445]: E1002 19:58:14.308297 1445 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:14.399503 kubelet[1445]: E1002 19:58:14.399470 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:58:14.401384 env[1141]: time="2023-10-02T19:58:14.401326697Z" level=info msg="CreateContainer within sandbox \"914f2f63e420899efc2048b67f3c52c8971cb82d30c44753e7384a6bae9804ef\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 19:58:14.411741 env[1141]: time="2023-10-02T19:58:14.411694744Z" level=info msg="CreateContainer 
within sandbox \"914f2f63e420899efc2048b67f3c52c8971cb82d30c44753e7384a6bae9804ef\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"d20683cd6a925addd1b4d41fe3625dfd981fee70187e01ae41a1f831e676e71f\"" Oct 2 19:58:14.412386 env[1141]: time="2023-10-02T19:58:14.412335702Z" level=info msg="StartContainer for \"d20683cd6a925addd1b4d41fe3625dfd981fee70187e01ae41a1f831e676e71f\"" Oct 2 19:58:14.430943 systemd[1]: run-containerd-runc-k8s.io-d20683cd6a925addd1b4d41fe3625dfd981fee70187e01ae41a1f831e676e71f-runc.4xxi8y.mount: Deactivated successfully. Oct 2 19:58:14.432248 systemd[1]: Started cri-containerd-d20683cd6a925addd1b4d41fe3625dfd981fee70187e01ae41a1f831e676e71f.scope. Oct 2 19:58:14.461801 systemd[1]: cri-containerd-d20683cd6a925addd1b4d41fe3625dfd981fee70187e01ae41a1f831e676e71f.scope: Deactivated successfully. Oct 2 19:58:14.468873 env[1141]: time="2023-10-02T19:58:14.468820085Z" level=info msg="shim disconnected" id=d20683cd6a925addd1b4d41fe3625dfd981fee70187e01ae41a1f831e676e71f Oct 2 19:58:14.468873 env[1141]: time="2023-10-02T19:58:14.468869885Z" level=warning msg="cleaning up after shim disconnected" id=d20683cd6a925addd1b4d41fe3625dfd981fee70187e01ae41a1f831e676e71f namespace=k8s.io Oct 2 19:58:14.469072 env[1141]: time="2023-10-02T19:58:14.468881645Z" level=info msg="cleaning up dead shim" Oct 2 19:58:14.476506 env[1141]: time="2023-10-02T19:58:14.476460182Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:58:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1958 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:58:14Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/d20683cd6a925addd1b4d41fe3625dfd981fee70187e01ae41a1f831e676e71f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:58:14.476767 env[1141]: time="2023-10-02T19:58:14.476717581Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:58:14.481379 env[1141]: time="2023-10-02T19:58:14.481325806Z" level=error msg="Failed to pipe stdout of container \"d20683cd6a925addd1b4d41fe3625dfd981fee70187e01ae41a1f831e676e71f\"" error="reading from a closed fifo" Oct 2 19:58:14.481450 env[1141]: time="2023-10-02T19:58:14.481426966Z" level=error msg="Failed to pipe stderr of container \"d20683cd6a925addd1b4d41fe3625dfd981fee70187e01ae41a1f831e676e71f\"" error="reading from a closed fifo" Oct 2 19:58:14.483000 env[1141]: time="2023-10-02T19:58:14.482947841Z" level=error msg="StartContainer for \"d20683cd6a925addd1b4d41fe3625dfd981fee70187e01ae41a1f831e676e71f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:58:14.483310 kubelet[1445]: E1002 19:58:14.483271 1445 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="d20683cd6a925addd1b4d41fe3625dfd981fee70187e01ae41a1f831e676e71f" Oct 2 19:58:14.483422 kubelet[1445]: E1002 19:58:14.483396 1445 kuberuntime_manager.go:872] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:58:14.483422 kubelet[1445]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:58:14.483422 kubelet[1445]: rm /hostbin/cilium-mount Oct 2 19:58:14.483422 kubelet[1445]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jd9mb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-wxnxh_kube-system(24e9887e-8f45-47a9-a855-9f7ea67bebf8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:58:14.483594 kubelet[1445]: E1002 19:58:14.483431 1445 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-wxnxh" podUID=24e9887e-8f45-47a9-a855-9f7ea67bebf8 Oct 2 19:58:14.592569 kubelet[1445]: I1002 19:58:14.592399 1445 scope.go:115] "RemoveContainer" containerID="8bc57318207308aa9e5f8efea0185e837ee770aa35b59a620aed856482041b44" Oct 2 19:58:14.592851 kubelet[1445]: I1002 19:58:14.592830 1445 scope.go:115] "RemoveContainer" containerID="8bc57318207308aa9e5f8efea0185e837ee770aa35b59a620aed856482041b44" Oct 2 19:58:14.594378 env[1141]: time="2023-10-02T19:58:14.594306413Z" level=info msg="RemoveContainer for \"8bc57318207308aa9e5f8efea0185e837ee770aa35b59a620aed856482041b44\"" Oct 2 19:58:14.594494 env[1141]: time="2023-10-02T19:58:14.594459052Z" level=info msg="RemoveContainer for \"8bc57318207308aa9e5f8efea0185e837ee770aa35b59a620aed856482041b44\"" Oct 2 19:58:14.594648 env[1141]: time="2023-10-02T19:58:14.594614892Z" level=error msg="RemoveContainer for \"8bc57318207308aa9e5f8efea0185e837ee770aa35b59a620aed856482041b44\" failed" error="failed to set removing state for container 
\"8bc57318207308aa9e5f8efea0185e837ee770aa35b59a620aed856482041b44\": container is already in removing state" Oct 2 19:58:14.595328 kubelet[1445]: E1002 19:58:14.594780 1445 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"8bc57318207308aa9e5f8efea0185e837ee770aa35b59a620aed856482041b44\": container is already in removing state" containerID="8bc57318207308aa9e5f8efea0185e837ee770aa35b59a620aed856482041b44" Oct 2 19:58:14.595328 kubelet[1445]: E1002 19:58:14.594841 1445 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "8bc57318207308aa9e5f8efea0185e837ee770aa35b59a620aed856482041b44": container is already in removing state; Skipping pod "cilium-wxnxh_kube-system(24e9887e-8f45-47a9-a855-9f7ea67bebf8)" Oct 2 19:58:14.595328 kubelet[1445]: E1002 19:58:14.594911 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:58:14.595328 kubelet[1445]: E1002 19:58:14.595188 1445 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-wxnxh_kube-system(24e9887e-8f45-47a9-a855-9f7ea67bebf8)\"" pod="kube-system/cilium-wxnxh" podUID=24e9887e-8f45-47a9-a855-9f7ea67bebf8 Oct 2 19:58:14.596629 env[1141]: time="2023-10-02T19:58:14.596529766Z" level=info msg="RemoveContainer for \"8bc57318207308aa9e5f8efea0185e837ee770aa35b59a620aed856482041b44\" returns successfully" Oct 2 19:58:15.304928 kubelet[1445]: E1002 19:58:15.304443 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:15.407319 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d20683cd6a925addd1b4d41fe3625dfd981fee70187e01ae41a1f831e676e71f-rootfs.mount: Deactivated successfully. 
Oct 2 19:58:16.305590 kubelet[1445]: E1002 19:58:16.305529 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:17.305726 kubelet[1445]: E1002 19:58:17.305668 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:17.573155 kubelet[1445]: W1002 19:58:17.573045 1445 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24e9887e_8f45_47a9_a855_9f7ea67bebf8.slice/cri-containerd-d20683cd6a925addd1b4d41fe3625dfd981fee70187e01ae41a1f831e676e71f.scope WatchSource:0}: task d20683cd6a925addd1b4d41fe3625dfd981fee70187e01ae41a1f831e676e71f not found: not found Oct 2 19:58:18.305845 kubelet[1445]: E1002 19:58:18.305808 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:19.306462 kubelet[1445]: E1002 19:58:19.306421 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:19.308749 kubelet[1445]: E1002 19:58:19.308719 1445 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:20.307115 kubelet[1445]: E1002 19:58:20.307069 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:21.307435 kubelet[1445]: E1002 19:58:21.307367 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:22.308107 kubelet[1445]: E1002 19:58:22.308071 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:23.309628 kubelet[1445]: E1002 19:58:23.309592 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:24.309718 kubelet[1445]: E1002 19:58:24.309684 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:24.310195 kubelet[1445]: E1002 19:58:24.310167 1445 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:25.310092 kubelet[1445]: E1002 19:58:25.310044 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:26.310418 kubelet[1445]: E1002 19:58:26.310356 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:27.311289 kubelet[1445]: E1002 19:58:27.311211 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:28.312120 kubelet[1445]: E1002 19:58:28.312060 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:28.399141 kubelet[1445]: E1002 19:58:28.399113 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:58:28.399415 kubelet[1445]: E1002 
19:58:28.399389 1445 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-wxnxh_kube-system(24e9887e-8f45-47a9-a855-9f7ea67bebf8)\"" pod="kube-system/cilium-wxnxh" podUID=24e9887e-8f45-47a9-a855-9f7ea67bebf8 Oct 2 19:58:29.216368 kubelet[1445]: E1002 19:58:29.216308 1445 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:29.311131 kubelet[1445]: E1002 19:58:29.311097 1445 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:29.312235 kubelet[1445]: E1002 19:58:29.312211 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:30.313085 kubelet[1445]: E1002 19:58:30.312912 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:31.313534 kubelet[1445]: E1002 19:58:31.313474 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:32.313979 kubelet[1445]: E1002 19:58:32.313930 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:33.314680 kubelet[1445]: E1002 19:58:33.314641 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:34.312733 kubelet[1445]: E1002 19:58:34.312705 1445 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:34.314900 kubelet[1445]: E1002 19:58:34.314877 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:35.315877 kubelet[1445]: E1002 19:58:35.315843 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:36.316450 kubelet[1445]: E1002 19:58:36.316376 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:37.316944 kubelet[1445]: E1002 19:58:37.316904 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:38.318106 kubelet[1445]: E1002 19:58:38.318054 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:39.313679 kubelet[1445]: E1002 19:58:39.313655 1445 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:39.318879 kubelet[1445]: E1002 19:58:39.318812 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:40.319192 kubelet[1445]: E1002 19:58:40.319134 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:41.320019 kubelet[1445]: E1002 19:58:41.319962 1445 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:41.398801 kubelet[1445]: E1002 19:58:41.398771 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:58:41.399236 kubelet[1445]: E1002 19:58:41.399212 1445 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-wxnxh_kube-system(24e9887e-8f45-47a9-a855-9f7ea67bebf8)\"" pod="kube-system/cilium-wxnxh" podUID=24e9887e-8f45-47a9-a855-9f7ea67bebf8 Oct 2 19:58:42.321075 kubelet[1445]: E1002 19:58:42.321006 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:43.321865 kubelet[1445]: E1002 19:58:43.321815 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:44.315133 kubelet[1445]: E1002 19:58:44.315100 1445 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:44.322258 kubelet[1445]: E1002 19:58:44.322221 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:45.323410 kubelet[1445]: E1002 19:58:45.323356 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:46.323808 kubelet[1445]: E1002 19:58:46.323775 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:47.324842 kubelet[1445]: E1002 19:58:47.324806 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:48.326255 kubelet[1445]: E1002 19:58:48.326220 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:49.216604 kubelet[1445]: E1002 19:58:49.216573 1445 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:49.316522 kubelet[1445]: E1002 19:58:49.316490 1445 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:49.326705 kubelet[1445]: E1002 19:58:49.326675 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:50.327549 kubelet[1445]: E1002 19:58:50.327487 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:51.327848 kubelet[1445]: E1002 19:58:51.327798 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:52.328706 kubelet[1445]: E1002 19:58:52.328649 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:52.398563 kubelet[1445]: E1002 19:58:52.398510 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Oct 2 19:58:52.398717 kubelet[1445]: E1002 19:58:52.398711 1445 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-wxnxh_kube-system(24e9887e-8f45-47a9-a855-9f7ea67bebf8)\"" pod="kube-system/cilium-wxnxh" podUID=24e9887e-8f45-47a9-a855-9f7ea67bebf8 Oct 2 19:58:53.329126 kubelet[1445]: E1002 19:58:53.329085 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:54.317410 kubelet[1445]: E1002 19:58:54.317386 1445 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:54.329722 kubelet[1445]: E1002 19:58:54.329695 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:55.330359 kubelet[1445]: E1002 19:58:55.330326 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:56.330861 kubelet[1445]: E1002 19:58:56.330817 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:57.331819 kubelet[1445]: E1002 19:58:57.331773 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:58.332256 kubelet[1445]: E1002 19:58:58.332212 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:58:59.318085 kubelet[1445]: E1002 19:58:59.318060 1445 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:58:59.333262 kubelet[1445]: E1002 19:58:59.333234 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:00.333587 kubelet[1445]: E1002 19:59:00.333551 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:01.334328 kubelet[1445]: E1002 19:59:01.334272 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:02.335519 kubelet[1445]: E1002 19:59:02.335424 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:03.336016 kubelet[1445]: E1002 19:59:03.335956 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:04.319227 kubelet[1445]: E1002 19:59:04.319167 1445 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:04.336445 kubelet[1445]: E1002 19:59:04.336410 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:05.337400 kubelet[1445]: E1002 19:59:05.337334 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:05.398521 kubelet[1445]: E1002 19:59:05.398492 
1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:05.398780 kubelet[1445]: E1002 19:59:05.398588 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:05.398957 kubelet[1445]: E1002 19:59:05.398943 1445 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-wxnxh_kube-system(24e9887e-8f45-47a9-a855-9f7ea67bebf8)\"" pod="kube-system/cilium-wxnxh" podUID=24e9887e-8f45-47a9-a855-9f7ea67bebf8 Oct 2 19:59:06.338118 kubelet[1445]: E1002 19:59:06.338073 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:07.338667 kubelet[1445]: E1002 19:59:07.338621 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:08.338802 kubelet[1445]: E1002 19:59:08.338751 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:09.215991 kubelet[1445]: E1002 19:59:09.215910 1445 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:09.320165 kubelet[1445]: E1002 19:59:09.320133 1445 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:09.339473 kubelet[1445]: E1002 19:59:09.339409 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:10.339609 kubelet[1445]: E1002 19:59:10.339550 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:11.340047 kubelet[1445]: E1002 19:59:11.340004 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:12.340759 kubelet[1445]: E1002 19:59:12.340712 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:13.341178 kubelet[1445]: E1002 19:59:13.341114 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:14.320978 kubelet[1445]: E1002 19:59:14.320943 1445 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:14.341355 kubelet[1445]: E1002 19:59:14.341317 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:15.342214 kubelet[1445]: E1002 19:59:15.342178 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:16.343232 kubelet[1445]: E1002 19:59:16.343177 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:17.343909 kubelet[1445]: E1002 19:59:17.343865 1445 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:18.344036 kubelet[1445]: E1002 19:59:18.343984 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:19.321800 kubelet[1445]: E1002 19:59:19.321777 1445 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:19.344431 kubelet[1445]: E1002 19:59:19.344403 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:19.399803 kubelet[1445]: E1002 19:59:19.399775 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:19.399998 kubelet[1445]: E1002 19:59:19.399976 1445 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-wxnxh_kube-system(24e9887e-8f45-47a9-a855-9f7ea67bebf8)\"" pod="kube-system/cilium-wxnxh" podUID=24e9887e-8f45-47a9-a855-9f7ea67bebf8 Oct 2 19:59:20.344914 kubelet[1445]: E1002 19:59:20.344874 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:21.346376 kubelet[1445]: E1002 19:59:21.346334 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:22.347386 kubelet[1445]: E1002 19:59:22.347329 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:23.347953 kubelet[1445]: E1002 19:59:23.347888 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:24.322996 kubelet[1445]: E1002 19:59:24.322970 1445 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:24.348182 kubelet[1445]: E1002 19:59:24.348149 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:25.348779 kubelet[1445]: E1002 19:59:25.348727 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:26.349584 kubelet[1445]: E1002 19:59:26.349514 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:27.350273 kubelet[1445]: E1002 19:59:27.350222 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:28.350748 kubelet[1445]: E1002 19:59:28.350703 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:28.725532 env[1141]: time="2023-10-02T19:59:28.725471031Z" level=info msg="StopPodSandbox for \"914f2f63e420899efc2048b67f3c52c8971cb82d30c44753e7384a6bae9804ef\"" Oct 2 19:59:28.727111 env[1141]: time="2023-10-02T19:59:28.725547152Z" level=info msg="Container to stop \"d20683cd6a925addd1b4d41fe3625dfd981fee70187e01ae41a1f831e676e71f\" must be in 
running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:59:28.726727 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-914f2f63e420899efc2048b67f3c52c8971cb82d30c44753e7384a6bae9804ef-shm.mount: Deactivated successfully. Oct 2 19:59:28.733771 systemd[1]: cri-containerd-914f2f63e420899efc2048b67f3c52c8971cb82d30c44753e7384a6bae9804ef.scope: Deactivated successfully. Oct 2 19:59:28.734658 kernel: kauditd_printk_skb: 281 callbacks suppressed Oct 2 19:59:28.734755 kernel: audit: type=1334 audit(1696276768.733:650): prog-id=65 op=UNLOAD Oct 2 19:59:28.733000 audit: BPF prog-id=65 op=UNLOAD Oct 2 19:59:28.739000 audit: BPF prog-id=68 op=UNLOAD Oct 2 19:59:28.740293 kernel: audit: type=1334 audit(1696276768.739:651): prog-id=68 op=UNLOAD Oct 2 19:59:28.753645 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-914f2f63e420899efc2048b67f3c52c8971cb82d30c44753e7384a6bae9804ef-rootfs.mount: Deactivated successfully. Oct 2 19:59:28.759019 env[1141]: time="2023-10-02T19:59:28.758966061Z" level=info msg="shim disconnected" id=914f2f63e420899efc2048b67f3c52c8971cb82d30c44753e7384a6bae9804ef Oct 2 19:59:28.759761 env[1141]: time="2023-10-02T19:59:28.759731388Z" level=warning msg="cleaning up after shim disconnected" id=914f2f63e420899efc2048b67f3c52c8971cb82d30c44753e7384a6bae9804ef namespace=k8s.io Oct 2 19:59:28.759860 env[1141]: time="2023-10-02T19:59:28.759846349Z" level=info msg="cleaning up dead shim" Oct 2 19:59:28.767980 env[1141]: time="2023-10-02T19:59:28.767940144Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:59:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1995 runtime=io.containerd.runc.v2\n" Oct 2 19:59:28.768436 env[1141]: time="2023-10-02T19:59:28.768405588Z" level=info msg="TearDown network for sandbox \"914f2f63e420899efc2048b67f3c52c8971cb82d30c44753e7384a6bae9804ef\" successfully" Oct 2 19:59:28.768534 env[1141]: time="2023-10-02T19:59:28.768516909Z" level=info msg="StopPodSandbox for \"914f2f63e420899efc2048b67f3c52c8971cb82d30c44753e7384a6bae9804ef\" returns successfully" Oct 2 19:59:28.886388 kubelet[1445]: I1002 19:59:28.885991 1445 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/24e9887e-8f45-47a9-a855-9f7ea67bebf8-hubble-tls\") pod \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\" (UID: \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\") " Oct 2 19:59:28.886388 kubelet[1445]: I1002 19:59:28.886204 1445 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jd9mb\" (UniqueName: \"kubernetes.io/projected/24e9887e-8f45-47a9-a855-9f7ea67bebf8-kube-api-access-jd9mb\") pod \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\" (UID: \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\") " Oct 2 19:59:28.886388 kubelet[1445]: I1002 19:59:28.886229 1445 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-host-proc-sys-net\") pod \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\" (UID: \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\") " Oct 2 19:59:28.886388 kubelet[1445]: I1002 19:59:28.886366 1445 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "24e9887e-8f45-47a9-a855-9f7ea67bebf8" (UID: "24e9887e-8f45-47a9-a855-9f7ea67bebf8"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:28.886633 kubelet[1445]: I1002 19:59:28.886425 1445 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "24e9887e-8f45-47a9-a855-9f7ea67bebf8" (UID: "24e9887e-8f45-47a9-a855-9f7ea67bebf8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:28.888880 kubelet[1445]: I1002 19:59:28.886806 1445 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-host-proc-sys-kernel\") pod \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\" (UID: \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\") " Oct 2 19:59:28.888880 kubelet[1445]: W1002 19:59:28.887129 1445 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/24e9887e-8f45-47a9-a855-9f7ea67bebf8/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:59:28.889335 kubelet[1445]: I1002 19:59:28.889196 1445 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24e9887e-8f45-47a9-a855-9f7ea67bebf8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "24e9887e-8f45-47a9-a855-9f7ea67bebf8" (UID: "24e9887e-8f45-47a9-a855-9f7ea67bebf8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:59:28.889335 kubelet[1445]: I1002 19:59:28.889332 1445 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/24e9887e-8f45-47a9-a855-9f7ea67bebf8-cilium-config-path\") pod \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\" (UID: \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\") " Oct 2 19:59:28.889437 kubelet[1445]: I1002 19:59:28.889360 1445 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-cilium-cgroup\") pod \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\" (UID: \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\") " Oct 2 19:59:28.889467 kubelet[1445]: I1002 19:59:28.889451 1445 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "24e9887e-8f45-47a9-a855-9f7ea67bebf8" (UID: "24e9887e-8f45-47a9-a855-9f7ea67bebf8"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:28.889520 kubelet[1445]: I1002 19:59:28.889497 1445 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-cni-path\") pod \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\" (UID: \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\") " Oct 2 19:59:28.889551 kubelet[1445]: I1002 19:59:28.889523 1445 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-bpf-maps\") pod \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\" (UID: \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\") " Oct 2 19:59:28.889580 kubelet[1445]: I1002 19:59:28.889557 1445 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-cni-path" (OuterVolumeSpecName: "cni-path") pod "24e9887e-8f45-47a9-a855-9f7ea67bebf8" (UID: "24e9887e-8f45-47a9-a855-9f7ea67bebf8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:28.889607 kubelet[1445]: I1002 19:59:28.889578 1445 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "24e9887e-8f45-47a9-a855-9f7ea67bebf8" (UID: "24e9887e-8f45-47a9-a855-9f7ea67bebf8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:28.889816 kubelet[1445]: I1002 19:59:28.889615 1445 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/24e9887e-8f45-47a9-a855-9f7ea67bebf8-clustermesh-secrets\") pod \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\" (UID: \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\") " Oct 2 19:59:28.889859 kubelet[1445]: I1002 19:59:28.889825 1445 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-etc-cni-netd\") pod \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\" (UID: \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\") " Oct 2 19:59:28.889859 kubelet[1445]: I1002 19:59:28.889845 1445 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-xtables-lock\") pod \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\" (UID: \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\") " Oct 2 19:59:28.889842 systemd[1]: var-lib-kubelet-pods-24e9887e\x2d8f45\x2d47a9\x2da855\x2d9f7ea67bebf8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djd9mb.mount: Deactivated successfully. Oct 2 19:59:28.889992 kubelet[1445]: I1002 19:59:28.889880 1445 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "24e9887e-8f45-47a9-a855-9f7ea67bebf8" (UID: "24e9887e-8f45-47a9-a855-9f7ea67bebf8"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:28.889992 kubelet[1445]: I1002 19:59:28.889903 1445 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "24e9887e-8f45-47a9-a855-9f7ea67bebf8" (UID: "24e9887e-8f45-47a9-a855-9f7ea67bebf8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:28.889992 kubelet[1445]: I1002 19:59:28.889919 1445 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-hostproc\") pod \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\" (UID: \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\") " Oct 2 19:59:28.889992 kubelet[1445]: I1002 19:59:28.889952 1445 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-hostproc" (OuterVolumeSpecName: "hostproc") pod "24e9887e-8f45-47a9-a855-9f7ea67bebf8" (UID: "24e9887e-8f45-47a9-a855-9f7ea67bebf8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:28.889992 kubelet[1445]: I1002 19:59:28.889967 1445 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-cilium-run\") pod \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\" (UID: \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\") " Oct 2 19:59:28.890126 kubelet[1445]: I1002 19:59:28.889984 1445 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-lib-modules\") pod \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\" (UID: \"24e9887e-8f45-47a9-a855-9f7ea67bebf8\") " Oct 2 19:59:28.890126 kubelet[1445]: I1002 19:59:28.890008 1445 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-cilium-cgroup\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:59:28.890126 kubelet[1445]: I1002 19:59:28.890029 1445 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "24e9887e-8f45-47a9-a855-9f7ea67bebf8" (UID: "24e9887e-8f45-47a9-a855-9f7ea67bebf8"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:28.890126 kubelet[1445]: I1002 19:59:28.890057 1445 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-cni-path\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:59:28.890126 kubelet[1445]: I1002 19:59:28.890072 1445 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-bpf-maps\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:59:28.890126 kubelet[1445]: I1002 19:59:28.890082 1445 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-etc-cni-netd\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:59:28.890126 kubelet[1445]: I1002 19:59:28.890105 1445 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "24e9887e-8f45-47a9-a855-9f7ea67bebf8" (UID: "24e9887e-8f45-47a9-a855-9f7ea67bebf8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:59:28.890314 kubelet[1445]: I1002 19:59:28.890191 1445 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-xtables-lock\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:59:28.890314 kubelet[1445]: I1002 19:59:28.890206 1445 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-hostproc\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:59:28.890314 kubelet[1445]: I1002 19:59:28.890216 1445 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-host-proc-sys-kernel\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:59:28.890314 kubelet[1445]: I1002 19:59:28.890228 1445 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/24e9887e-8f45-47a9-a855-9f7ea67bebf8-cilium-config-path\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:59:28.890314 kubelet[1445]: I1002 19:59:28.890237 1445 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-host-proc-sys-net\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:59:28.890436 kubelet[1445]: I1002 19:59:28.890380 1445 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24e9887e-8f45-47a9-a855-9f7ea67bebf8-kube-api-access-jd9mb" (OuterVolumeSpecName: "kube-api-access-jd9mb") pod "24e9887e-8f45-47a9-a855-9f7ea67bebf8" (UID: "24e9887e-8f45-47a9-a855-9f7ea67bebf8"). InnerVolumeSpecName "kube-api-access-jd9mb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:59:28.893090 kubelet[1445]: I1002 19:59:28.893059 1445 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24e9887e-8f45-47a9-a855-9f7ea67bebf8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "24e9887e-8f45-47a9-a855-9f7ea67bebf8" (UID: "24e9887e-8f45-47a9-a855-9f7ea67bebf8"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:59:28.893674 systemd[1]: var-lib-kubelet-pods-24e9887e\x2d8f45\x2d47a9\x2da855\x2d9f7ea67bebf8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:59:28.894539 kubelet[1445]: I1002 19:59:28.894510 1445 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24e9887e-8f45-47a9-a855-9f7ea67bebf8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "24e9887e-8f45-47a9-a855-9f7ea67bebf8" (UID: "24e9887e-8f45-47a9-a855-9f7ea67bebf8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:59:28.894894 systemd[1]: var-lib-kubelet-pods-24e9887e\x2d8f45\x2d47a9\x2da855\x2d9f7ea67bebf8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:59:28.991027 kubelet[1445]: I1002 19:59:28.990880 1445 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/24e9887e-8f45-47a9-a855-9f7ea67bebf8-hubble-tls\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:59:28.991027 kubelet[1445]: I1002 19:59:28.990921 1445 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-jd9mb\" (UniqueName: \"kubernetes.io/projected/24e9887e-8f45-47a9-a855-9f7ea67bebf8-kube-api-access-jd9mb\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:59:28.991027 kubelet[1445]: I1002 19:59:28.990933 1445 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/24e9887e-8f45-47a9-a855-9f7ea67bebf8-clustermesh-secrets\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:59:28.991027 kubelet[1445]: I1002 19:59:28.990942 1445 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-cilium-run\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:59:28.991027 kubelet[1445]: I1002 19:59:28.990951 1445 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24e9887e-8f45-47a9-a855-9f7ea67bebf8-lib-modules\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:59:29.215785 kubelet[1445]: E1002 19:59:29.215740 1445 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:29.324198 kubelet[1445]: E1002 19:59:29.324094 1445 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:29.351881 kubelet[1445]: E1002 19:59:29.351843 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:29.404367 systemd[1]: Removed slice kubepods-burstable-pod24e9887e_8f45_47a9_a855_9f7ea67bebf8.slice. 
Oct 2 19:59:29.701556 kubelet[1445]: I1002 19:59:29.701526 1445 scope.go:115] "RemoveContainer" containerID="d20683cd6a925addd1b4d41fe3625dfd981fee70187e01ae41a1f831e676e71f" Oct 2 19:59:29.703779 env[1141]: time="2023-10-02T19:59:29.703734254Z" level=info msg="RemoveContainer for \"d20683cd6a925addd1b4d41fe3625dfd981fee70187e01ae41a1f831e676e71f\"" Oct 2 19:59:29.705927 env[1141]: time="2023-10-02T19:59:29.705901114Z" level=info msg="RemoveContainer for \"d20683cd6a925addd1b4d41fe3625dfd981fee70187e01ae41a1f831e676e71f\" returns successfully" Oct 2 19:59:30.352983 kubelet[1445]: E1002 19:59:30.352932 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:31.354010 kubelet[1445]: E1002 19:59:31.353961 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:31.401540 kubelet[1445]: I1002 19:59:31.401498 1445 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=24e9887e-8f45-47a9-a855-9f7ea67bebf8 path="/var/lib/kubelet/pods/24e9887e-8f45-47a9-a855-9f7ea67bebf8/volumes" Oct 2 19:59:31.724688 kubelet[1445]: I1002 19:59:31.724557 1445 topology_manager.go:210] "Topology Admit Handler" Oct 2 19:59:31.724688 kubelet[1445]: E1002 19:59:31.724608 1445 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="24e9887e-8f45-47a9-a855-9f7ea67bebf8" containerName="mount-cgroup" Oct 2 19:59:31.724688 kubelet[1445]: E1002 19:59:31.724619 1445 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="24e9887e-8f45-47a9-a855-9f7ea67bebf8" containerName="mount-cgroup" Oct 2 19:59:31.724688 kubelet[1445]: E1002 19:59:31.724626 1445 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="24e9887e-8f45-47a9-a855-9f7ea67bebf8" containerName="mount-cgroup" Oct 2 19:59:31.724688 kubelet[1445]: E1002 19:59:31.724634 1445 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="24e9887e-8f45-47a9-a855-9f7ea67bebf8" containerName="mount-cgroup" Oct 2 19:59:31.724688 kubelet[1445]: E1002 19:59:31.724640 1445 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="24e9887e-8f45-47a9-a855-9f7ea67bebf8" containerName="mount-cgroup" Oct 2 19:59:31.724688 kubelet[1445]: I1002 19:59:31.724655 1445 memory_manager.go:346] "RemoveStaleState removing state" podUID="24e9887e-8f45-47a9-a855-9f7ea67bebf8" containerName="mount-cgroup" Oct 2 19:59:31.724688 kubelet[1445]: I1002 19:59:31.724660 1445 memory_manager.go:346] "RemoveStaleState removing state" podUID="24e9887e-8f45-47a9-a855-9f7ea67bebf8" containerName="mount-cgroup" Oct 2 19:59:31.724688 kubelet[1445]: I1002 19:59:31.724666 1445 memory_manager.go:346] "RemoveStaleState removing state" podUID="24e9887e-8f45-47a9-a855-9f7ea67bebf8" containerName="mount-cgroup" Oct 2 19:59:31.724688 kubelet[1445]: I1002 19:59:31.724671 1445 memory_manager.go:346] "RemoveStaleState removing state" podUID="24e9887e-8f45-47a9-a855-9f7ea67bebf8" containerName="mount-cgroup" Oct 2 19:59:31.729129 systemd[1]: Created slice kubepods-besteffort-pod1e3dd315_24f6_4a31_b945_a09b09949520.slice. 
Oct 2 19:59:31.733144 kubelet[1445]: I1002 19:59:31.732296 1445 topology_manager.go:210] "Topology Admit Handler" Oct 2 19:59:31.733144 kubelet[1445]: I1002 19:59:31.732351 1445 memory_manager.go:346] "RemoveStaleState removing state" podUID="24e9887e-8f45-47a9-a855-9f7ea67bebf8" containerName="mount-cgroup" Oct 2 19:59:31.737138 systemd[1]: Created slice kubepods-burstable-pod8de52f76_f8ad_43ae_9ef4_27ad0d637d5c.slice. Oct 2 19:59:31.908653 kubelet[1445]: I1002 19:59:31.908523 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdtdk\" (UniqueName: \"kubernetes.io/projected/1e3dd315-24f6-4a31-b945-a09b09949520-kube-api-access-bdtdk\") pod \"cilium-operator-f59cbd8c6-cxdmd\" (UID: \"1e3dd315-24f6-4a31-b945-a09b09949520\") " pod="kube-system/cilium-operator-f59cbd8c6-cxdmd" Oct 2 19:59:31.908653 kubelet[1445]: I1002 19:59:31.908645 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-cilium-run\") pod \"cilium-lk75f\" (UID: \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\") " pod="kube-system/cilium-lk75f" Oct 2 19:59:31.908847 kubelet[1445]: I1002 19:59:31.908764 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-cilium-config-path\") pod \"cilium-lk75f\" (UID: \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\") " pod="kube-system/cilium-lk75f" Oct 2 19:59:31.908878 kubelet[1445]: I1002 19:59:31.908853 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-bpf-maps\") pod \"cilium-lk75f\" (UID: \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\") " pod="kube-system/cilium-lk75f" Oct 2 19:59:31.908918 kubelet[1445]: I1002 19:59:31.908880 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-hostproc\") pod \"cilium-lk75f\" (UID: \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\") " pod="kube-system/cilium-lk75f" Oct 2 19:59:31.908950 kubelet[1445]: I1002 19:59:31.908932 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-cilium-cgroup\") pod \"cilium-lk75f\" (UID: \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\") " pod="kube-system/cilium-lk75f" Oct 2 19:59:31.909043 kubelet[1445]: I1002 19:59:31.908954 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-host-proc-sys-net\") pod \"cilium-lk75f\" (UID: \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\") " pod="kube-system/cilium-lk75f" Oct 2 19:59:31.909090 kubelet[1445]: I1002 19:59:31.909060 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-clustermesh-secrets\") pod \"cilium-lk75f\" (UID: \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\") " pod="kube-system/cilium-lk75f" Oct 2 19:59:31.909121 kubelet[1445]: I1002 19:59:31.909101 1445 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-cilium-ipsec-secrets\") pod \"cilium-lk75f\" (UID: \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\") " pod="kube-system/cilium-lk75f" Oct 2 19:59:31.909146 kubelet[1445]: I1002 19:59:31.909140 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-hubble-tls\") pod \"cilium-lk75f\" (UID: \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\") " pod="kube-system/cilium-lk75f" Oct 2 19:59:31.909214 kubelet[1445]: I1002 19:59:31.909188 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-host-proc-sys-kernel\") pod \"cilium-lk75f\" (UID: \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\") " pod="kube-system/cilium-lk75f" Oct 2 19:59:31.909248 kubelet[1445]: I1002 19:59:31.909223 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e3dd315-24f6-4a31-b945-a09b09949520-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-cxdmd\" (UID: \"1e3dd315-24f6-4a31-b945-a09b09949520\") " pod="kube-system/cilium-operator-f59cbd8c6-cxdmd" Oct 2 19:59:31.909248 kubelet[1445]: I1002 19:59:31.909243 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-cni-path\") pod \"cilium-lk75f\" (UID: \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\") " pod="kube-system/cilium-lk75f" Oct 2 19:59:31.909322 kubelet[1445]: I1002 19:59:31.909264 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-etc-cni-netd\") pod \"cilium-lk75f\" (UID: \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\") " pod="kube-system/cilium-lk75f" Oct 2 19:59:31.909322 kubelet[1445]: I1002 19:59:31.909297 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-lib-modules\") pod \"cilium-lk75f\" (UID: \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\") " pod="kube-system/cilium-lk75f" Oct 2 19:59:31.909322 kubelet[1445]: I1002 19:59:31.909318 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-xtables-lock\") pod \"cilium-lk75f\" (UID: \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\") " pod="kube-system/cilium-lk75f" Oct 2 19:59:31.909433 kubelet[1445]: I1002 19:59:31.909338 1445 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4px62\" (UniqueName: \"kubernetes.io/projected/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-kube-api-access-4px62\") pod \"cilium-lk75f\" (UID: \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\") " pod="kube-system/cilium-lk75f" Oct 2 19:59:32.032827 kubelet[1445]: E1002 19:59:32.031903 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Oct 2 19:59:32.032950 env[1141]: time="2023-10-02T19:59:32.032357925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-cxdmd,Uid:1e3dd315-24f6-4a31-b945-a09b09949520,Namespace:kube-system,Attempt:0,}" Oct 2 19:59:32.047467 env[1141]: time="2023-10-02T19:59:32.047246094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:59:32.047467 env[1141]: time="2023-10-02T19:59:32.047418696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:59:32.047467 env[1141]: time="2023-10-02T19:59:32.047430816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:59:32.047870 env[1141]: time="2023-10-02T19:59:32.047689418Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/06c2a517510a3012773cc7df9ae8e642376a82d12fcf8f38790419bff729b2c5 pid=2023 runtime=io.containerd.runc.v2 Oct 2 19:59:32.051841 kubelet[1445]: E1002 19:59:32.051805 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:32.052379 env[1141]: time="2023-10-02T19:59:32.052336939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lk75f,Uid:8de52f76-f8ad-43ae-9ef4-27ad0d637d5c,Namespace:kube-system,Attempt:0,}" Oct 2 19:59:32.063080 systemd[1]: Started cri-containerd-06c2a517510a3012773cc7df9ae8e642376a82d12fcf8f38790419bff729b2c5.scope. Oct 2 19:59:32.069386 env[1141]: time="2023-10-02T19:59:32.067908234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:59:32.069386 env[1141]: time="2023-10-02T19:59:32.067962114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:59:32.069386 env[1141]: time="2023-10-02T19:59:32.067975834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:59:32.069386 env[1141]: time="2023-10-02T19:59:32.068114275Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/45c09315ddd84fc46175bcfa8597e98bed3d41f0559c2cde39b5ca1affe6a544 pid=2050 runtime=io.containerd.runc.v2 Oct 2 19:59:32.083815 systemd[1]: Started cri-containerd-45c09315ddd84fc46175bcfa8597e98bed3d41f0559c2cde39b5ca1affe6a544.scope. 
Oct 2 19:59:32.106000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.106000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.116772 kernel: audit: type=1400 audit(1696276772.106:652): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.116884 kernel: audit: type=1400 audit(1696276772.106:653): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.116907 kernel: audit: type=1400 audit(1696276772.107:654): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.116925 kernel: audit: type=1400 audit(1696276772.107:655): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.116945 kernel: audit: type=1400 audit(1696276772.107:656): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.116963 kernel: audit: type=1400 audit(1696276772.107:657): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.107000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.107000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.107000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.107000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.119222 kernel: audit: type=1400 audit(1696276772.107:658): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.107000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.119915 kernel: audit: type=1400 audit(1696276772.107:659): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.107000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.107000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.108000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.108000 audit: BPF prog-id=72 op=LOAD Oct 2 19:59:32.109000 audit[2033]: AVC avc: denied { bpf } for pid=2033 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.109000 audit[2033]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000195b38 a2=10 a3=0 items=0 ppid=2023 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:32.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036633261353137353130613330313237373363633764663961653865 Oct 2 19:59:32.109000 audit[2033]: AVC avc: denied { perfmon } for pid=2033 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.109000 audit[2033]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=2023 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:32.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036633261353137353130613330313237373363633764663961653865 Oct 2 19:59:32.109000 audit[2033]: AVC avc: denied { bpf } for pid=2033 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.109000 audit[2033]: AVC avc: denied { bpf } for pid=2033 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.109000 audit[2033]: AVC avc: denied { bpf } for pid=2033 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.109000 audit[2033]: AVC avc: denied { perfmon } for pid=2033 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.109000 audit[2033]: AVC avc: denied { perfmon } for pid=2033 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.109000 audit[2033]: AVC avc: denied { perfmon } for pid=2033 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.109000 audit[2033]: AVC avc: 
denied { perfmon } for pid=2033 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.109000 audit[2033]: AVC avc: denied { perfmon } for pid=2033 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.109000 audit[2033]: AVC avc: denied { bpf } for pid=2033 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.109000 audit[2033]: AVC avc: denied { bpf } for pid=2033 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.109000 audit: BPF prog-id=73 op=LOAD Oct 2 19:59:32.109000 audit[2033]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=2023 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:32.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036633261353137353130613330313237373363633764663961653865 Oct 2 19:59:32.111000 audit[2033]: AVC avc: denied { bpf } for pid=2033 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.111000 audit[2033]: AVC avc: denied { bpf } for pid=2033 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.111000 audit[2033]: AVC avc: denied { perfmon } for pid=2033 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.111000 audit[2033]: AVC avc: denied { perfmon } for pid=2033 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.111000 audit[2033]: AVC avc: denied { perfmon } for pid=2033 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.111000 audit[2033]: AVC avc: denied { perfmon } for pid=2033 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.111000 audit[2033]: AVC avc: denied { perfmon } for pid=2033 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.111000 audit[2033]: AVC avc: denied { bpf } for pid=2033 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.111000 audit[2033]: AVC avc: denied { bpf } for pid=2033 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.111000 audit: BPF prog-id=74 op=LOAD Oct 2 19:59:32.111000 audit[2033]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000195670 a2=78 
a3=0 items=0 ppid=2023 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:32.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036633261353137353130613330313237373363633764663961653865 Oct 2 19:59:32.111000 audit: BPF prog-id=74 op=UNLOAD Oct 2 19:59:32.111000 audit: BPF prog-id=73 op=UNLOAD Oct 2 19:59:32.111000 audit[2033]: AVC avc: denied { bpf } for pid=2033 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.111000 audit[2033]: AVC avc: denied { bpf } for pid=2033 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.111000 audit[2033]: AVC avc: denied { bpf } for pid=2033 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.111000 audit[2033]: AVC avc: denied { perfmon } for pid=2033 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.111000 audit[2033]: AVC avc: denied { perfmon } for pid=2033 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.111000 audit[2033]: AVC avc: denied { perfmon } for pid=2033 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.111000 audit[2033]: AVC avc: denied { perfmon } for pid=2033 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.111000 audit[2033]: AVC avc: denied { perfmon } for pid=2033 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.111000 audit[2033]: AVC avc: denied { bpf } for pid=2033 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.111000 audit[2033]: AVC avc: denied { bpf } for pid=2033 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.111000 audit: BPF prog-id=75 op=LOAD Oct 2 19:59:32.111000 audit[2033]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=2023 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:32.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036633261353137353130613330313237373363633764663961653865 Oct 2 19:59:32.133000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.133000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.133000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.133000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.133000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.133000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.133000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.133000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.133000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.133000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.133000 audit: BPF prog-id=76 op=LOAD Oct 2 19:59:32.134000 audit[2058]: AVC avc: denied { bpf } for pid=2058 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.134000 audit[2058]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000195b38 a2=10 a3=0 items=0 ppid=2050 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:32.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435633039333135646464383466633436313735626366613835393765 Oct 2 19:59:32.134000 audit[2058]: AVC avc: denied { perfmon } for pid=2058 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.134000 audit[2058]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=2050 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:32.134000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435633039333135646464383466633436313735626366613835393765 Oct 2 19:59:32.134000 audit[2058]: AVC avc: denied { bpf } for pid=2058 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.134000 audit[2058]: AVC avc: denied { bpf } for pid=2058 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.134000 audit[2058]: AVC avc: denied { bpf } for pid=2058 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.134000 audit[2058]: AVC avc: denied { perfmon } for pid=2058 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.134000 audit[2058]: AVC avc: denied { perfmon } for pid=2058 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.134000 audit[2058]: AVC avc: denied { perfmon } for pid=2058 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.134000 audit[2058]: AVC avc: denied { perfmon } for pid=2058 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.134000 audit[2058]: AVC avc: denied { perfmon } for pid=2058 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.134000 audit[2058]: AVC avc: denied { bpf } for pid=2058 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.134000 audit[2058]: AVC avc: denied { bpf } for pid=2058 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.134000 audit: BPF prog-id=77 op=LOAD Oct 2 19:59:32.134000 audit[2058]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=2050 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:32.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435633039333135646464383466633436313735626366613835393765 Oct 2 19:59:32.134000 audit[2058]: AVC avc: denied { bpf } for pid=2058 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.134000 audit[2058]: AVC avc: denied { bpf } for pid=2058 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.134000 audit[2058]: AVC avc: denied { perfmon } for 
pid=2058 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.134000 audit[2058]: AVC avc: denied { perfmon } for pid=2058 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.134000 audit[2058]: AVC avc: denied { perfmon } for pid=2058 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.134000 audit[2058]: AVC avc: denied { perfmon } for pid=2058 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.134000 audit[2058]: AVC avc: denied { perfmon } for pid=2058 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.134000 audit[2058]: AVC avc: denied { bpf } for pid=2058 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.134000 audit[2058]: AVC avc: denied { bpf } for pid=2058 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.134000 audit: BPF prog-id=78 op=LOAD Oct 2 19:59:32.134000 audit[2058]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=2050 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:32.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435633039333135646464383466633436313735626366613835393765 Oct 2 19:59:32.134000 audit: BPF prog-id=78 op=UNLOAD Oct 2 19:59:32.134000 audit: BPF prog-id=77 op=UNLOAD Oct 2 19:59:32.134000 audit[2058]: AVC avc: denied { bpf } for pid=2058 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.134000 audit[2058]: AVC avc: denied { bpf } for pid=2058 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.134000 audit[2058]: AVC avc: denied { bpf } for pid=2058 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.134000 audit[2058]: AVC avc: denied { perfmon } for pid=2058 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.134000 audit[2058]: AVC avc: denied { perfmon } for pid=2058 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.134000 audit[2058]: AVC avc: denied { perfmon } for pid=2058 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.134000 audit[2058]: AVC avc: denied { perfmon } for pid=2058 
comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.134000 audit[2058]: AVC avc: denied { perfmon } for pid=2058 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.134000 audit[2058]: AVC avc: denied { bpf } for pid=2058 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.134000 audit[2058]: AVC avc: denied { bpf } for pid=2058 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:32.134000 audit: BPF prog-id=79 op=LOAD Oct 2 19:59:32.134000 audit[2058]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=2050 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:32.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435633039333135646464383466633436313735626366613835393765 Oct 2 19:59:32.141834 env[1141]: time="2023-10-02T19:59:32.141788074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-cxdmd,Uid:1e3dd315-24f6-4a31-b945-a09b09949520,Namespace:kube-system,Attempt:0,} returns sandbox id \"06c2a517510a3012773cc7df9ae8e642376a82d12fcf8f38790419bff729b2c5\"" Oct 2 19:59:32.142362 kubelet[1445]: E1002 19:59:32.142341 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:32.143411 env[1141]: time="2023-10-02T19:59:32.143376688Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 2 19:59:32.150165 env[1141]: time="2023-10-02T19:59:32.150122467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lk75f,Uid:8de52f76-f8ad-43ae-9ef4-27ad0d637d5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"45c09315ddd84fc46175bcfa8597e98bed3d41f0559c2cde39b5ca1affe6a544\"" Oct 2 19:59:32.150831 kubelet[1445]: E1002 19:59:32.150795 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:32.152608 env[1141]: time="2023-10-02T19:59:32.152574608Z" level=info msg="CreateContainer within sandbox \"45c09315ddd84fc46175bcfa8597e98bed3d41f0559c2cde39b5ca1affe6a544\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:59:32.163210 env[1141]: time="2023-10-02T19:59:32.163133140Z" level=info msg="CreateContainer within sandbox \"45c09315ddd84fc46175bcfa8597e98bed3d41f0559c2cde39b5ca1affe6a544\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f48eb2d661aa9291afbb1e695e99febed3b9d48ddc380bbb024510f988e887d4\"" Oct 2 19:59:32.163633 env[1141]: time="2023-10-02T19:59:32.163605464Z" level=info msg="StartContainer for \"f48eb2d661aa9291afbb1e695e99febed3b9d48ddc380bbb024510f988e887d4\"" Oct 2 
19:59:32.180076 systemd[1]: Started cri-containerd-f48eb2d661aa9291afbb1e695e99febed3b9d48ddc380bbb024510f988e887d4.scope. Oct 2 19:59:32.205991 systemd[1]: cri-containerd-f48eb2d661aa9291afbb1e695e99febed3b9d48ddc380bbb024510f988e887d4.scope: Deactivated successfully. Oct 2 19:59:32.225741 env[1141]: time="2023-10-02T19:59:32.225687682Z" level=info msg="shim disconnected" id=f48eb2d661aa9291afbb1e695e99febed3b9d48ddc380bbb024510f988e887d4 Oct 2 19:59:32.225963 env[1141]: time="2023-10-02T19:59:32.225944404Z" level=warning msg="cleaning up after shim disconnected" id=f48eb2d661aa9291afbb1e695e99febed3b9d48ddc380bbb024510f988e887d4 namespace=k8s.io Oct 2 19:59:32.226021 env[1141]: time="2023-10-02T19:59:32.226009085Z" level=info msg="cleaning up dead shim" Oct 2 19:59:32.234447 env[1141]: time="2023-10-02T19:59:32.234399078Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:59:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2124 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:59:32Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/f48eb2d661aa9291afbb1e695e99febed3b9d48ddc380bbb024510f988e887d4/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:59:32.234856 env[1141]: time="2023-10-02T19:59:32.234800841Z" level=error msg="copy shim log" error="read /proc/self/fd/36: file already closed" Oct 2 19:59:32.235376 env[1141]: time="2023-10-02T19:59:32.235329646Z" level=error msg="Failed to pipe stdout of container \"f48eb2d661aa9291afbb1e695e99febed3b9d48ddc380bbb024510f988e887d4\"" error="reading from a closed fifo" Oct 2 19:59:32.236361 env[1141]: time="2023-10-02T19:59:32.236330335Z" level=error msg="Failed to pipe stderr of container \"f48eb2d661aa9291afbb1e695e99febed3b9d48ddc380bbb024510f988e887d4\"" error="reading from a closed fifo" Oct 2 19:59:32.237890 env[1141]: time="2023-10-02T19:59:32.237829068Z" level=error msg="StartContainer for \"f48eb2d661aa9291afbb1e695e99febed3b9d48ddc380bbb024510f988e887d4\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:59:32.238164 kubelet[1445]: E1002 19:59:32.238131 1445 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="f48eb2d661aa9291afbb1e695e99febed3b9d48ddc380bbb024510f988e887d4" Oct 2 19:59:32.238264 kubelet[1445]: E1002 19:59:32.238246 1445 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:59:32.238264 kubelet[1445]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:59:32.238264 kubelet[1445]: rm /hostbin/cilium-mount Oct 2 19:59:32.238264 kubelet[1445]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4px62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-lk75f_kube-system(8de52f76-f8ad-43ae-9ef4-27ad0d637d5c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:59:32.238512 kubelet[1445]: E1002 19:59:32.238320 1445 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lk75f" podUID=8de52f76-f8ad-43ae-9ef4-27ad0d637d5c Oct 2 19:59:32.354692 kubelet[1445]: E1002 19:59:32.354638 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:32.708754 kubelet[1445]: E1002 19:59:32.708659 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:32.710847 env[1141]: time="2023-10-02T19:59:32.710804010Z" level=info msg="CreateContainer within sandbox \"45c09315ddd84fc46175bcfa8597e98bed3d41f0559c2cde39b5ca1affe6a544\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:59:32.725050 env[1141]: time="2023-10-02T19:59:32.725002413Z" level=info msg="CreateContainer within sandbox \"45c09315ddd84fc46175bcfa8597e98bed3d41f0559c2cde39b5ca1affe6a544\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"fbfc775e19ff219a5479f02ab869c68d911b0eef1803405ae309c44d693c8017\"" Oct 2 19:59:32.725854 env[1141]: time="2023-10-02T19:59:32.725826780Z" level=info msg="StartContainer for \"fbfc775e19ff219a5479f02ab869c68d911b0eef1803405ae309c44d693c8017\"" Oct 2 19:59:32.745193 systemd[1]: Started cri-containerd-fbfc775e19ff219a5479f02ab869c68d911b0eef1803405ae309c44d693c8017.scope. 
Oct 2 19:59:32.760700 systemd[1]: cri-containerd-fbfc775e19ff219a5479f02ab869c68d911b0eef1803405ae309c44d693c8017.scope: Deactivated successfully. Oct 2 19:59:32.769440 env[1141]: time="2023-10-02T19:59:32.769384398Z" level=info msg="shim disconnected" id=fbfc775e19ff219a5479f02ab869c68d911b0eef1803405ae309c44d693c8017 Oct 2 19:59:32.769613 env[1141]: time="2023-10-02T19:59:32.769443839Z" level=warning msg="cleaning up after shim disconnected" id=fbfc775e19ff219a5479f02ab869c68d911b0eef1803405ae309c44d693c8017 namespace=k8s.io Oct 2 19:59:32.769613 env[1141]: time="2023-10-02T19:59:32.769453799Z" level=info msg="cleaning up dead shim" Oct 2 19:59:32.779845 env[1141]: time="2023-10-02T19:59:32.779759808Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:59:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2164 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:59:32Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/fbfc775e19ff219a5479f02ab869c68d911b0eef1803405ae309c44d693c8017/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:59:32.780108 env[1141]: time="2023-10-02T19:59:32.780034971Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 19:59:32.780262 env[1141]: time="2023-10-02T19:59:32.780213252Z" level=error msg="Failed to pipe stdout of container \"fbfc775e19ff219a5479f02ab869c68d911b0eef1803405ae309c44d693c8017\"" error="reading from a closed fifo" Oct 2 19:59:32.780331 env[1141]: time="2023-10-02T19:59:32.780289333Z" level=error msg="Failed to pipe stderr of container \"fbfc775e19ff219a5479f02ab869c68d911b0eef1803405ae309c44d693c8017\"" error="reading from a closed fifo" Oct 2 19:59:32.781897 env[1141]: time="2023-10-02T19:59:32.781840626Z" level=error msg="StartContainer for \"fbfc775e19ff219a5479f02ab869c68d911b0eef1803405ae309c44d693c8017\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:59:32.782078 kubelet[1445]: E1002 19:59:32.782036 1445 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="fbfc775e19ff219a5479f02ab869c68d911b0eef1803405ae309c44d693c8017" Oct 2 19:59:32.782162 kubelet[1445]: E1002 19:59:32.782137 1445 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:59:32.782162 kubelet[1445]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:59:32.782162 kubelet[1445]: rm /hostbin/cilium-mount Oct 2 19:59:32.782162 kubelet[1445]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4px62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-lk75f_kube-system(8de52f76-f8ad-43ae-9ef4-27ad0d637d5c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:59:32.782315 kubelet[1445]: E1002 19:59:32.782171 1445 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lk75f" podUID=8de52f76-f8ad-43ae-9ef4-27ad0d637d5c Oct 2 19:59:33.355332 kubelet[1445]: E1002 19:59:33.355263 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:33.425231 env[1141]: time="2023-10-02T19:59:33.425186548Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:59:33.426341 env[1141]: time="2023-10-02T19:59:33.426313398Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:59:33.428011 env[1141]: time="2023-10-02T19:59:33.427984852Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:59:33.428439 env[1141]: time="2023-10-02T19:59:33.428410095Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Oct 2 
19:59:33.430427 env[1141]: time="2023-10-02T19:59:33.430396312Z" level=info msg="CreateContainer within sandbox \"06c2a517510a3012773cc7df9ae8e642376a82d12fcf8f38790419bff729b2c5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 19:59:33.442016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3777685663.mount: Deactivated successfully. Oct 2 19:59:33.451956 env[1141]: time="2023-10-02T19:59:33.451902296Z" level=info msg="CreateContainer within sandbox \"06c2a517510a3012773cc7df9ae8e642376a82d12fcf8f38790419bff729b2c5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d5bcdaaaeebb6d46e867ef891f2897ef468c18e6307861a227b8eeb5750b13b7\"" Oct 2 19:59:33.452671 env[1141]: time="2023-10-02T19:59:33.452646902Z" level=info msg="StartContainer for \"d5bcdaaaeebb6d46e867ef891f2897ef468c18e6307861a227b8eeb5750b13b7\"" Oct 2 19:59:33.471610 systemd[1]: Started cri-containerd-d5bcdaaaeebb6d46e867ef891f2897ef468c18e6307861a227b8eeb5750b13b7.scope. Oct 2 19:59:33.495000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.495000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.495000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.495000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.495000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.495000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.495000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.495000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.495000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.495000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.495000 audit: BPF prog-id=80 op=LOAD Oct 2 19:59:33.496000 audit[2184]: AVC avc: denied { bpf } for pid=2184 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.496000 audit[2184]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000195b38 a2=10 a3=0 items=0 ppid=2023 pid=2184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:33.496000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435626364616161656562623664343665383637656638393166323839 Oct 2 19:59:33.496000 audit[2184]: AVC avc: denied { perfmon } for pid=2184 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.496000 audit[2184]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=2023 pid=2184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:33.496000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435626364616161656562623664343665383637656638393166323839 Oct 2 19:59:33.496000 audit[2184]: AVC avc: denied { bpf } for pid=2184 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.496000 audit[2184]: AVC avc: denied { bpf } for pid=2184 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.496000 audit[2184]: AVC avc: denied { bpf } for pid=2184 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.496000 audit[2184]: AVC avc: denied { perfmon } for pid=2184 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.496000 audit[2184]: AVC avc: denied { perfmon } for pid=2184 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.496000 audit[2184]: AVC avc: denied { perfmon } for pid=2184 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.496000 audit[2184]: AVC avc: denied { perfmon } for pid=2184 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.496000 audit[2184]: AVC avc: denied { perfmon } for pid=2184 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.496000 audit[2184]: AVC avc: denied { bpf } for pid=2184 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.496000 audit[2184]: AVC avc: denied { bpf } for pid=2184 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.496000 audit: BPF prog-id=81 op=LOAD Oct 2 19:59:33.496000 audit[2184]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 
a1=40001958e0 a2=78 a3=0 items=0 ppid=2023 pid=2184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:33.496000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435626364616161656562623664343665383637656638393166323839 Oct 2 19:59:33.496000 audit[2184]: AVC avc: denied { bpf } for pid=2184 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.496000 audit[2184]: AVC avc: denied { bpf } for pid=2184 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.496000 audit[2184]: AVC avc: denied { perfmon } for pid=2184 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.496000 audit[2184]: AVC avc: denied { perfmon } for pid=2184 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.496000 audit[2184]: AVC avc: denied { perfmon } for pid=2184 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.496000 audit[2184]: AVC avc: denied { perfmon } for pid=2184 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.496000 audit[2184]: AVC avc: denied { perfmon } for pid=2184 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.496000 audit[2184]: AVC avc: denied { bpf } for pid=2184 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.496000 audit[2184]: AVC avc: denied { bpf } for pid=2184 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.496000 audit: BPF prog-id=82 op=LOAD Oct 2 19:59:33.496000 audit[2184]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=2023 pid=2184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:33.496000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435626364616161656562623664343665383637656638393166323839 Oct 2 19:59:33.496000 audit: BPF prog-id=82 op=UNLOAD Oct 2 19:59:33.496000 audit: BPF prog-id=81 op=UNLOAD Oct 2 19:59:33.496000 audit[2184]: AVC avc: denied { bpf } for pid=2184 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.496000 audit[2184]: AVC avc: denied { bpf } for pid=2184 comm="runc" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.496000 audit[2184]: AVC avc: denied { bpf } for pid=2184 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.496000 audit[2184]: AVC avc: denied { perfmon } for pid=2184 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.496000 audit[2184]: AVC avc: denied { perfmon } for pid=2184 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.496000 audit[2184]: AVC avc: denied { perfmon } for pid=2184 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.496000 audit[2184]: AVC avc: denied { perfmon } for pid=2184 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.496000 audit[2184]: AVC avc: denied { perfmon } for pid=2184 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.496000 audit[2184]: AVC avc: denied { bpf } for pid=2184 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.496000 audit[2184]: AVC avc: denied { bpf } for pid=2184 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:59:33.496000 audit: BPF prog-id=83 op=LOAD Oct 2 19:59:33.496000 audit[2184]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=2023 pid=2184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:59:33.496000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435626364616161656562623664343665383637656638393166323839 Oct 2 19:59:33.509869 env[1141]: time="2023-10-02T19:59:33.509820790Z" level=info msg="StartContainer for \"d5bcdaaaeebb6d46e867ef891f2897ef468c18e6307861a227b8eeb5750b13b7\" returns successfully" Oct 2 19:59:33.567000 audit[2195]: AVC avc: denied { map_create } for pid=2195 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c255,c353 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c255,c353 tclass=bpf permissive=0 Oct 2 19:59:33.567000 audit[2195]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-13 a0=0 a1=400070f768 a2=48 a3=0 items=0 ppid=2023 pid=2195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c255,c353 key=(null) Oct 2 19:59:33.567000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 19:59:33.713667 kubelet[1445]: I1002 
19:59:33.713563 1445 scope.go:115] "RemoveContainer" containerID="f48eb2d661aa9291afbb1e695e99febed3b9d48ddc380bbb024510f988e887d4" Oct 2 19:59:33.714502 kubelet[1445]: I1002 19:59:33.713901 1445 scope.go:115] "RemoveContainer" containerID="f48eb2d661aa9291afbb1e695e99febed3b9d48ddc380bbb024510f988e887d4" Oct 2 19:59:33.717351 env[1141]: time="2023-10-02T19:59:33.717308921Z" level=info msg="RemoveContainer for \"f48eb2d661aa9291afbb1e695e99febed3b9d48ddc380bbb024510f988e887d4\"" Oct 2 19:59:33.717555 kubelet[1445]: E1002 19:59:33.717512 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:33.717759 env[1141]: time="2023-10-02T19:59:33.717714165Z" level=info msg="RemoveContainer for \"f48eb2d661aa9291afbb1e695e99febed3b9d48ddc380bbb024510f988e887d4\"" Oct 2 19:59:33.717850 env[1141]: time="2023-10-02T19:59:33.717819126Z" level=error msg="RemoveContainer for \"f48eb2d661aa9291afbb1e695e99febed3b9d48ddc380bbb024510f988e887d4\" failed" error="failed to set removing state for container \"f48eb2d661aa9291afbb1e695e99febed3b9d48ddc380bbb024510f988e887d4\": container is already in removing state" Oct 2 19:59:33.717981 kubelet[1445]: E1002 19:59:33.717956 1445 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"f48eb2d661aa9291afbb1e695e99febed3b9d48ddc380bbb024510f988e887d4\": container is already in removing state" containerID="f48eb2d661aa9291afbb1e695e99febed3b9d48ddc380bbb024510f988e887d4" Oct 2 19:59:33.718044 kubelet[1445]: E1002 19:59:33.717991 1445 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "f48eb2d661aa9291afbb1e695e99febed3b9d48ddc380bbb024510f988e887d4": container is already in removing state; Skipping pod "cilium-lk75f_kube-system(8de52f76-f8ad-43ae-9ef4-27ad0d637d5c)" Oct 2 19:59:33.718044 kubelet[1445]: E1002 19:59:33.718039 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:33.718272 kubelet[1445]: E1002 19:59:33.718237 1445 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-lk75f_kube-system(8de52f76-f8ad-43ae-9ef4-27ad0d637d5c)\"" pod="kube-system/cilium-lk75f" podUID=8de52f76-f8ad-43ae-9ef4-27ad0d637d5c Oct 2 19:59:33.720523 env[1141]: time="2023-10-02T19:59:33.720484549Z" level=info msg="RemoveContainer for \"f48eb2d661aa9291afbb1e695e99febed3b9d48ddc380bbb024510f988e887d4\" returns successfully" Oct 2 19:59:34.325358 kubelet[1445]: E1002 19:59:34.325320 1445 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:34.356001 kubelet[1445]: E1002 19:59:34.355965 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:34.720206 kubelet[1445]: E1002 19:59:34.720169 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 
19:59:34.720206 kubelet[1445]: E1002 19:59:34.720210 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:34.720447 kubelet[1445]: E1002 19:59:34.720416 1445 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-lk75f_kube-system(8de52f76-f8ad-43ae-9ef4-27ad0d637d5c)\"" pod="kube-system/cilium-lk75f" podUID=8de52f76-f8ad-43ae-9ef4-27ad0d637d5c Oct 2 19:59:34.735445 kubelet[1445]: I1002 19:59:34.735402 1445 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-cxdmd" podStartSLOduration=-9.223372033119423e+09 pod.CreationTimestamp="2023-10-02 19:59:31 +0000 UTC" firstStartedPulling="2023-10-02 19:59:32.143074846 +0000 UTC m=+203.731832584" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 19:59:33.736126882 +0000 UTC m=+205.324884660" watchObservedRunningTime="2023-10-02 19:59:34.735351992 +0000 UTC m=+206.324109770" Oct 2 19:59:35.330833 kubelet[1445]: W1002 19:59:35.330798 1445 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8de52f76_f8ad_43ae_9ef4_27ad0d637d5c.slice/cri-containerd-f48eb2d661aa9291afbb1e695e99febed3b9d48ddc380bbb024510f988e887d4.scope WatchSource:0}: container "f48eb2d661aa9291afbb1e695e99febed3b9d48ddc380bbb024510f988e887d4" in namespace "k8s.io": not found Oct 2 19:59:35.356538 kubelet[1445]: E1002 19:59:35.356496 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:36.357616 kubelet[1445]: E1002 19:59:36.357577 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:37.358248 kubelet[1445]: E1002 19:59:37.358191 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:38.358879 kubelet[1445]: E1002 19:59:38.358827 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:38.438139 kubelet[1445]: W1002 19:59:38.438097 1445 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8de52f76_f8ad_43ae_9ef4_27ad0d637d5c.slice/cri-containerd-fbfc775e19ff219a5479f02ab869c68d911b0eef1803405ae309c44d693c8017.scope WatchSource:0}: task fbfc775e19ff219a5479f02ab869c68d911b0eef1803405ae309c44d693c8017 not found: not found Oct 2 19:59:39.326649 kubelet[1445]: E1002 19:59:39.326607 1445 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:39.358988 kubelet[1445]: E1002 19:59:39.358943 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:40.359540 kubelet[1445]: E1002 19:59:40.359487 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:41.359674 kubelet[1445]: E1002 19:59:41.359629 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:59:42.360553 kubelet[1445]: E1002 19:59:42.360507 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:43.361339 kubelet[1445]: E1002 19:59:43.361263 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:44.327207 kubelet[1445]: E1002 19:59:44.327166 1445 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:44.362453 kubelet[1445]: E1002 19:59:44.362415 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:45.363061 kubelet[1445]: E1002 19:59:45.363014 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:46.363203 kubelet[1445]: E1002 19:59:46.363155 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:46.398957 kubelet[1445]: E1002 19:59:46.398926 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:46.400945 env[1141]: time="2023-10-02T19:59:46.400907209Z" level=info msg="CreateContainer within sandbox \"45c09315ddd84fc46175bcfa8597e98bed3d41f0559c2cde39b5ca1affe6a544\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:59:46.409428 env[1141]: time="2023-10-02T19:59:46.409379548Z" level=info msg="CreateContainer within sandbox \"45c09315ddd84fc46175bcfa8597e98bed3d41f0559c2cde39b5ca1affe6a544\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"ebff769d90871ab9ef2a1cfb1a633f128e473df52632efc0a23bfa4aea7c4e56\"" Oct 2 19:59:46.409890 env[1141]: time="2023-10-02T19:59:46.409863712Z" level=info msg="StartContainer for \"ebff769d90871ab9ef2a1cfb1a633f128e473df52632efc0a23bfa4aea7c4e56\"" Oct 2 19:59:46.427686 systemd[1]: Started cri-containerd-ebff769d90871ab9ef2a1cfb1a633f128e473df52632efc0a23bfa4aea7c4e56.scope. Oct 2 19:59:46.453397 systemd[1]: cri-containerd-ebff769d90871ab9ef2a1cfb1a633f128e473df52632efc0a23bfa4aea7c4e56.scope: Deactivated successfully. 
Oct 2 19:59:46.568707 env[1141]: time="2023-10-02T19:59:46.568653731Z" level=info msg="shim disconnected" id=ebff769d90871ab9ef2a1cfb1a633f128e473df52632efc0a23bfa4aea7c4e56 Oct 2 19:59:46.568707 env[1141]: time="2023-10-02T19:59:46.568706732Z" level=warning msg="cleaning up after shim disconnected" id=ebff769d90871ab9ef2a1cfb1a633f128e473df52632efc0a23bfa4aea7c4e56 namespace=k8s.io Oct 2 19:59:46.568707 env[1141]: time="2023-10-02T19:59:46.568716252Z" level=info msg="cleaning up dead shim" Oct 2 19:59:46.576668 env[1141]: time="2023-10-02T19:59:46.576611467Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:59:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2239 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:59:46Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ebff769d90871ab9ef2a1cfb1a633f128e473df52632efc0a23bfa4aea7c4e56/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:59:46.576928 env[1141]: time="2023-10-02T19:59:46.576868748Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 19:59:46.577085 env[1141]: time="2023-10-02T19:59:46.577035989Z" level=error msg="Failed to pipe stdout of container \"ebff769d90871ab9ef2a1cfb1a633f128e473df52632efc0a23bfa4aea7c4e56\"" error="reading from a closed fifo" Oct 2 19:59:46.577126 env[1141]: time="2023-10-02T19:59:46.577085030Z" level=error msg="Failed to pipe stderr of container \"ebff769d90871ab9ef2a1cfb1a633f128e473df52632efc0a23bfa4aea7c4e56\"" error="reading from a closed fifo" Oct 2 19:59:46.578424 env[1141]: time="2023-10-02T19:59:46.578381399Z" level=error msg="StartContainer for \"ebff769d90871ab9ef2a1cfb1a633f128e473df52632efc0a23bfa4aea7c4e56\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:59:46.578811 kubelet[1445]: E1002 19:59:46.578604 1445 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ebff769d90871ab9ef2a1cfb1a633f128e473df52632efc0a23bfa4aea7c4e56" Oct 2 19:59:46.578811 kubelet[1445]: E1002 19:59:46.578727 1445 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:59:46.578811 kubelet[1445]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:59:46.578811 kubelet[1445]: rm /hostbin/cilium-mount Oct 2 19:59:46.578989 kubelet[1445]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4px62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-lk75f_kube-system(8de52f76-f8ad-43ae-9ef4-27ad0d637d5c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:59:46.579046 kubelet[1445]: E1002 19:59:46.578786 1445 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lk75f" podUID=8de52f76-f8ad-43ae-9ef4-27ad0d637d5c Oct 2 19:59:46.745728 kubelet[1445]: I1002 19:59:46.745642 1445 scope.go:115] "RemoveContainer" containerID="fbfc775e19ff219a5479f02ab869c68d911b0eef1803405ae309c44d693c8017" Oct 2 19:59:46.746168 kubelet[1445]: I1002 19:59:46.746062 1445 scope.go:115] "RemoveContainer" containerID="fbfc775e19ff219a5479f02ab869c68d911b0eef1803405ae309c44d693c8017" Oct 2 19:59:46.749817 env[1141]: time="2023-10-02T19:59:46.749773346Z" level=info msg="RemoveContainer for \"fbfc775e19ff219a5479f02ab869c68d911b0eef1803405ae309c44d693c8017\"" Oct 2 19:59:46.750030 env[1141]: time="2023-10-02T19:59:46.750001067Z" level=info msg="RemoveContainer for \"fbfc775e19ff219a5479f02ab869c68d911b0eef1803405ae309c44d693c8017\"" Oct 2 19:59:46.750116 env[1141]: time="2023-10-02T19:59:46.750082868Z" level=error msg="RemoveContainer for \"fbfc775e19ff219a5479f02ab869c68d911b0eef1803405ae309c44d693c8017\" failed" error="failed to set removing state for container \"fbfc775e19ff219a5479f02ab869c68d911b0eef1803405ae309c44d693c8017\": container is already in removing state" Oct 2 19:59:46.750286 kubelet[1445]: E1002 19:59:46.750261 1445 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"fbfc775e19ff219a5479f02ab869c68d911b0eef1803405ae309c44d693c8017\": container is already in removing state" 
containerID="fbfc775e19ff219a5479f02ab869c68d911b0eef1803405ae309c44d693c8017" Oct 2 19:59:46.750333 kubelet[1445]: I1002 19:59:46.750310 1445 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:fbfc775e19ff219a5479f02ab869c68d911b0eef1803405ae309c44d693c8017} err="rpc error: code = Unknown desc = failed to set removing state for container \"fbfc775e19ff219a5479f02ab869c68d911b0eef1803405ae309c44d693c8017\": container is already in removing state" Oct 2 19:59:46.751986 env[1141]: time="2023-10-02T19:59:46.751952881Z" level=info msg="RemoveContainer for \"fbfc775e19ff219a5479f02ab869c68d911b0eef1803405ae309c44d693c8017\" returns successfully" Oct 2 19:59:46.752209 kubelet[1445]: E1002 19:59:46.752187 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:46.752492 kubelet[1445]: E1002 19:59:46.752478 1445 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-lk75f_kube-system(8de52f76-f8ad-43ae-9ef4-27ad0d637d5c)\"" pod="kube-system/cilium-lk75f" podUID=8de52f76-f8ad-43ae-9ef4-27ad0d637d5c Oct 2 19:59:47.363850 kubelet[1445]: E1002 19:59:47.363804 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:47.407125 systemd[1]: run-containerd-runc-k8s.io-ebff769d90871ab9ef2a1cfb1a633f128e473df52632efc0a23bfa4aea7c4e56-runc.tGzMgU.mount: Deactivated successfully. Oct 2 19:59:47.407224 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ebff769d90871ab9ef2a1cfb1a633f128e473df52632efc0a23bfa4aea7c4e56-rootfs.mount: Deactivated successfully. 
Oct 2 19:59:48.364372 kubelet[1445]: E1002 19:59:48.364332 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:49.217355 kubelet[1445]: E1002 19:59:49.217296 1445 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:49.327981 kubelet[1445]: E1002 19:59:49.327947 1445 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:49.365403 kubelet[1445]: E1002 19:59:49.365375 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:49.673637 kubelet[1445]: W1002 19:59:49.673599 1445 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8de52f76_f8ad_43ae_9ef4_27ad0d637d5c.slice/cri-containerd-ebff769d90871ab9ef2a1cfb1a633f128e473df52632efc0a23bfa4aea7c4e56.scope WatchSource:0}: task ebff769d90871ab9ef2a1cfb1a633f128e473df52632efc0a23bfa4aea7c4e56 not found: not found Oct 2 19:59:50.366024 kubelet[1445]: E1002 19:59:50.365971 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:51.366756 kubelet[1445]: E1002 19:59:51.366711 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:52.367264 kubelet[1445]: E1002 19:59:52.367226 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:53.368684 kubelet[1445]: E1002 19:59:53.368638 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:54.329571 kubelet[1445]: E1002 19:59:54.329546 1445 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:54.369052 kubelet[1445]: E1002 19:59:54.369014 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:55.369748 kubelet[1445]: E1002 19:59:55.369701 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:56.370264 kubelet[1445]: E1002 19:59:56.370219 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:57.371296 kubelet[1445]: E1002 19:59:57.371242 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:58.371726 kubelet[1445]: E1002 19:59:58.371680 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:59:58.400057 kubelet[1445]: E1002 19:59:58.399981 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:59:58.400505 kubelet[1445]: E1002 19:59:58.400484 1445 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed 
container=mount-cgroup pod=cilium-lk75f_kube-system(8de52f76-f8ad-43ae-9ef4-27ad0d637d5c)\"" pod="kube-system/cilium-lk75f" podUID=8de52f76-f8ad-43ae-9ef4-27ad0d637d5c Oct 2 19:59:59.330210 kubelet[1445]: E1002 19:59:59.330173 1445 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:59:59.372572 kubelet[1445]: E1002 19:59:59.372535 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:00.372993 kubelet[1445]: E1002 20:00:00.372938 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:01.373731 kubelet[1445]: E1002 20:00:01.373678 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:02.373919 kubelet[1445]: E1002 20:00:02.373831 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:03.374686 kubelet[1445]: E1002 20:00:03.374649 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:04.331640 kubelet[1445]: E1002 20:00:04.331601 1445 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:00:04.376062 kubelet[1445]: E1002 20:00:04.376031 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:05.377361 kubelet[1445]: E1002 20:00:05.377241 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:06.378318 kubelet[1445]: E1002 20:00:06.378274 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:07.379007 kubelet[1445]: E1002 20:00:07.378964 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:08.379579 kubelet[1445]: E1002 20:00:08.379542 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:09.216333 kubelet[1445]: E1002 20:00:09.216256 1445 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:09.226969 env[1141]: time="2023-10-02T20:00:09.226925093Z" level=info msg="StopPodSandbox for \"914f2f63e420899efc2048b67f3c52c8971cb82d30c44753e7384a6bae9804ef\"" Oct 2 20:00:09.227272 env[1141]: time="2023-10-02T20:00:09.227012094Z" level=info msg="TearDown network for sandbox \"914f2f63e420899efc2048b67f3c52c8971cb82d30c44753e7384a6bae9804ef\" successfully" Oct 2 20:00:09.227272 env[1141]: time="2023-10-02T20:00:09.227052774Z" level=info msg="StopPodSandbox for \"914f2f63e420899efc2048b67f3c52c8971cb82d30c44753e7384a6bae9804ef\" returns successfully" Oct 2 20:00:09.228636 env[1141]: time="2023-10-02T20:00:09.227710497Z" level=info msg="RemovePodSandbox for \"914f2f63e420899efc2048b67f3c52c8971cb82d30c44753e7384a6bae9804ef\"" Oct 2 20:00:09.228636 env[1141]: time="2023-10-02T20:00:09.227738137Z" level=info msg="Forcibly stopping sandbox 
\"914f2f63e420899efc2048b67f3c52c8971cb82d30c44753e7384a6bae9804ef\"" Oct 2 20:00:09.228636 env[1141]: time="2023-10-02T20:00:09.227798098Z" level=info msg="TearDown network for sandbox \"914f2f63e420899efc2048b67f3c52c8971cb82d30c44753e7384a6bae9804ef\" successfully" Oct 2 20:00:09.230066 env[1141]: time="2023-10-02T20:00:09.229943948Z" level=info msg="RemovePodSandbox \"914f2f63e420899efc2048b67f3c52c8971cb82d30c44753e7384a6bae9804ef\" returns successfully" Oct 2 20:00:09.332220 kubelet[1445]: E1002 20:00:09.332184 1445 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:00:09.380335 kubelet[1445]: E1002 20:00:09.380293 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:10.381305 kubelet[1445]: E1002 20:00:10.381266 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:11.381809 kubelet[1445]: E1002 20:00:11.381767 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:12.382714 kubelet[1445]: E1002 20:00:12.382679 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:12.399252 kubelet[1445]: E1002 20:00:12.399215 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:00:12.401220 env[1141]: time="2023-10-02T20:00:12.401169088Z" level=info msg="CreateContainer within sandbox \"45c09315ddd84fc46175bcfa8597e98bed3d41f0559c2cde39b5ca1affe6a544\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 20:00:12.410382 env[1141]: time="2023-10-02T20:00:12.410334610Z" level=info msg="CreateContainer within sandbox \"45c09315ddd84fc46175bcfa8597e98bed3d41f0559c2cde39b5ca1affe6a544\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"dd5c75824104f4ea6235567aec3debbdf3402858fe6e42673f6bf87f02807f7d\"" Oct 2 20:00:12.410767 env[1141]: time="2023-10-02T20:00:12.410738572Z" level=info msg="StartContainer for \"dd5c75824104f4ea6235567aec3debbdf3402858fe6e42673f6bf87f02807f7d\"" Oct 2 20:00:12.426908 systemd[1]: run-containerd-runc-k8s.io-dd5c75824104f4ea6235567aec3debbdf3402858fe6e42673f6bf87f02807f7d-runc.ReBKJb.mount: Deactivated successfully. Oct 2 20:00:12.428561 systemd[1]: Started cri-containerd-dd5c75824104f4ea6235567aec3debbdf3402858fe6e42673f6bf87f02807f7d.scope. Oct 2 20:00:12.450930 systemd[1]: cri-containerd-dd5c75824104f4ea6235567aec3debbdf3402858fe6e42673f6bf87f02807f7d.scope: Deactivated successfully. 
Oct 2 20:00:12.460437 env[1141]: time="2023-10-02T20:00:12.460391517Z" level=info msg="shim disconnected" id=dd5c75824104f4ea6235567aec3debbdf3402858fe6e42673f6bf87f02807f7d Oct 2 20:00:12.460676 env[1141]: time="2023-10-02T20:00:12.460656159Z" level=warning msg="cleaning up after shim disconnected" id=dd5c75824104f4ea6235567aec3debbdf3402858fe6e42673f6bf87f02807f7d namespace=k8s.io Oct 2 20:00:12.460754 env[1141]: time="2023-10-02T20:00:12.460741799Z" level=info msg="cleaning up dead shim" Oct 2 20:00:12.468659 env[1141]: time="2023-10-02T20:00:12.468606235Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:00:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2281 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:00:12Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/dd5c75824104f4ea6235567aec3debbdf3402858fe6e42673f6bf87f02807f7d/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:00:12.468920 env[1141]: time="2023-10-02T20:00:12.468866396Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 20:00:12.472386 env[1141]: time="2023-10-02T20:00:12.472336132Z" level=error msg="Failed to pipe stdout of container \"dd5c75824104f4ea6235567aec3debbdf3402858fe6e42673f6bf87f02807f7d\"" error="reading from a closed fifo" Oct 2 20:00:12.472459 env[1141]: time="2023-10-02T20:00:12.472364172Z" level=error msg="Failed to pipe stderr of container \"dd5c75824104f4ea6235567aec3debbdf3402858fe6e42673f6bf87f02807f7d\"" error="reading from a closed fifo" Oct 2 20:00:12.474126 env[1141]: time="2023-10-02T20:00:12.474086540Z" level=error msg="StartContainer for \"dd5c75824104f4ea6235567aec3debbdf3402858fe6e42673f6bf87f02807f7d\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:00:12.474453 kubelet[1445]: E1002 20:00:12.474422 1445 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="dd5c75824104f4ea6235567aec3debbdf3402858fe6e42673f6bf87f02807f7d" Oct 2 20:00:12.474593 kubelet[1445]: E1002 20:00:12.474564 1445 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:00:12.474593 kubelet[1445]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:00:12.474593 kubelet[1445]: rm /hostbin/cilium-mount Oct 2 20:00:12.474593 kubelet[1445]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4px62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-lk75f_kube-system(8de52f76-f8ad-43ae-9ef4-27ad0d637d5c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:00:12.474731 kubelet[1445]: E1002 20:00:12.474608 1445 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lk75f" podUID=8de52f76-f8ad-43ae-9ef4-27ad0d637d5c Oct 2 20:00:12.789210 kubelet[1445]: I1002 20:00:12.789034 1445 scope.go:115] "RemoveContainer" containerID="ebff769d90871ab9ef2a1cfb1a633f128e473df52632efc0a23bfa4aea7c4e56" Oct 2 20:00:12.789407 kubelet[1445]: I1002 20:00:12.789390 1445 scope.go:115] "RemoveContainer" containerID="ebff769d90871ab9ef2a1cfb1a633f128e473df52632efc0a23bfa4aea7c4e56" Oct 2 20:00:12.791960 env[1141]: time="2023-10-02T20:00:12.791790744Z" level=info msg="RemoveContainer for \"ebff769d90871ab9ef2a1cfb1a633f128e473df52632efc0a23bfa4aea7c4e56\"" Oct 2 20:00:12.792046 env[1141]: time="2023-10-02T20:00:12.792013745Z" level=info msg="RemoveContainer for \"ebff769d90871ab9ef2a1cfb1a633f128e473df52632efc0a23bfa4aea7c4e56\"" Oct 2 20:00:12.792166 env[1141]: time="2023-10-02T20:00:12.792092265Z" level=error msg="RemoveContainer for \"ebff769d90871ab9ef2a1cfb1a633f128e473df52632efc0a23bfa4aea7c4e56\" failed" error="failed to set removing state for container \"ebff769d90871ab9ef2a1cfb1a633f128e473df52632efc0a23bfa4aea7c4e56\": container is already in removing state" Oct 2 20:00:12.792887 kubelet[1445]: E1002 20:00:12.792308 1445 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"ebff769d90871ab9ef2a1cfb1a633f128e473df52632efc0a23bfa4aea7c4e56\": container is already in removing state" 
containerID="ebff769d90871ab9ef2a1cfb1a633f128e473df52632efc0a23bfa4aea7c4e56" Oct 2 20:00:12.792887 kubelet[1445]: E1002 20:00:12.792339 1445 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "ebff769d90871ab9ef2a1cfb1a633f128e473df52632efc0a23bfa4aea7c4e56": container is already in removing state; Skipping pod "cilium-lk75f_kube-system(8de52f76-f8ad-43ae-9ef4-27ad0d637d5c)" Oct 2 20:00:12.792887 kubelet[1445]: E1002 20:00:12.792390 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:00:12.792887 kubelet[1445]: E1002 20:00:12.792589 1445 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-lk75f_kube-system(8de52f76-f8ad-43ae-9ef4-27ad0d637d5c)\"" pod="kube-system/cilium-lk75f" podUID=8de52f76-f8ad-43ae-9ef4-27ad0d637d5c Oct 2 20:00:12.794025 env[1141]: time="2023-10-02T20:00:12.793996634Z" level=info msg="RemoveContainer for \"ebff769d90871ab9ef2a1cfb1a633f128e473df52632efc0a23bfa4aea7c4e56\" returns successfully" Oct 2 20:00:13.384150 kubelet[1445]: E1002 20:00:13.384103 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:13.408500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd5c75824104f4ea6235567aec3debbdf3402858fe6e42673f6bf87f02807f7d-rootfs.mount: Deactivated successfully. Oct 2 20:00:14.334052 kubelet[1445]: E1002 20:00:14.334026 1445 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:00:14.385032 kubelet[1445]: E1002 20:00:14.384964 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:15.385088 kubelet[1445]: E1002 20:00:15.385050 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:15.568320 kubelet[1445]: W1002 20:00:15.568262 1445 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8de52f76_f8ad_43ae_9ef4_27ad0d637d5c.slice/cri-containerd-dd5c75824104f4ea6235567aec3debbdf3402858fe6e42673f6bf87f02807f7d.scope WatchSource:0}: task dd5c75824104f4ea6235567aec3debbdf3402858fe6e42673f6bf87f02807f7d not found: not found Oct 2 20:00:16.385682 kubelet[1445]: E1002 20:00:16.385603 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:17.386816 kubelet[1445]: E1002 20:00:17.386731 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:18.387331 kubelet[1445]: E1002 20:00:18.387260 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:19.334967 kubelet[1445]: E1002 20:00:19.334927 1445 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:00:19.387394 kubelet[1445]: E1002 
20:00:19.387353 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:20.388249 kubelet[1445]: E1002 20:00:20.388219 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:21.389125 kubelet[1445]: E1002 20:00:21.389087 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:21.398905 kubelet[1445]: E1002 20:00:21.398866 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:00:22.390161 kubelet[1445]: E1002 20:00:22.390115 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:23.390978 kubelet[1445]: E1002 20:00:23.390918 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:24.336489 kubelet[1445]: E1002 20:00:24.336455 1445 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:00:24.391688 kubelet[1445]: E1002 20:00:24.391636 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:25.392744 kubelet[1445]: E1002 20:00:25.392687 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:26.393753 kubelet[1445]: E1002 20:00:26.393702 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:26.398680 kubelet[1445]: E1002 20:00:26.398661 1445 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 20:00:26.398902 kubelet[1445]: E1002 20:00:26.398881 1445 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-lk75f_kube-system(8de52f76-f8ad-43ae-9ef4-27ad0d637d5c)\"" pod="kube-system/cilium-lk75f" podUID=8de52f76-f8ad-43ae-9ef4-27ad0d637d5c Oct 2 20:00:27.394583 kubelet[1445]: E1002 20:00:27.394540 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:28.394663 kubelet[1445]: E1002 20:00:28.394621 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:29.216340 kubelet[1445]: E1002 20:00:29.216291 1445 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:29.337684 kubelet[1445]: E1002 20:00:29.337659 1445 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:00:29.396134 kubelet[1445]: E1002 20:00:29.396103 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:30.396353 kubelet[1445]: E1002 20:00:30.396298 1445 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:31.397494 kubelet[1445]: E1002 20:00:31.397434 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:32.397965 kubelet[1445]: E1002 20:00:32.397756 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:00:32.869518 env[1141]: time="2023-10-02T20:00:32.869475284Z" level=info msg="StopPodSandbox for \"45c09315ddd84fc46175bcfa8597e98bed3d41f0559c2cde39b5ca1affe6a544\"" Oct 2 20:00:32.871232 env[1141]: time="2023-10-02T20:00:32.869553684Z" level=info msg="Container to stop \"dd5c75824104f4ea6235567aec3debbdf3402858fe6e42673f6bf87f02807f7d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 20:00:32.870807 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-45c09315ddd84fc46175bcfa8597e98bed3d41f0559c2cde39b5ca1affe6a544-shm.mount: Deactivated successfully. Oct 2 20:00:32.877267 env[1141]: time="2023-10-02T20:00:32.877094949Z" level=info msg="StopContainer for \"d5bcdaaaeebb6d46e867ef891f2897ef468c18e6307861a227b8eeb5750b13b7\" with timeout 30 (s)" Oct 2 20:00:32.877551 env[1141]: time="2023-10-02T20:00:32.877470190Z" level=info msg="Stop container \"d5bcdaaaeebb6d46e867ef891f2897ef468c18e6307861a227b8eeb5750b13b7\" with signal terminated" Oct 2 20:00:32.879398 systemd[1]: cri-containerd-45c09315ddd84fc46175bcfa8597e98bed3d41f0559c2cde39b5ca1affe6a544.scope: Deactivated successfully. Oct 2 20:00:32.880953 kernel: kauditd_printk_skb: 166 callbacks suppressed Oct 2 20:00:32.881033 kernel: audit: type=1334 audit(1696276832.878:707): prog-id=76 op=UNLOAD Oct 2 20:00:32.878000 audit: BPF prog-id=76 op=UNLOAD Oct 2 20:00:32.883000 audit: BPF prog-id=79 op=UNLOAD Oct 2 20:00:32.885300 kernel: audit: type=1334 audit(1696276832.883:708): prog-id=79 op=UNLOAD Oct 2 20:00:32.896766 systemd[1]: cri-containerd-d5bcdaaaeebb6d46e867ef891f2897ef468c18e6307861a227b8eeb5750b13b7.scope: Deactivated successfully. Oct 2 20:00:32.895000 audit: BPF prog-id=80 op=UNLOAD Oct 2 20:00:32.898307 kernel: audit: type=1334 audit(1696276832.895:709): prog-id=80 op=UNLOAD Oct 2 20:00:32.905640 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45c09315ddd84fc46175bcfa8597e98bed3d41f0559c2cde39b5ca1affe6a544-rootfs.mount: Deactivated successfully. Oct 2 20:00:32.907361 kernel: audit: type=1334 audit(1696276832.905:710): prog-id=83 op=UNLOAD Oct 2 20:00:32.905000 audit: BPF prog-id=83 op=UNLOAD Oct 2 20:00:32.911977 env[1141]: time="2023-10-02T20:00:32.911937623Z" level=info msg="shim disconnected" id=45c09315ddd84fc46175bcfa8597e98bed3d41f0559c2cde39b5ca1affe6a544 Oct 2 20:00:32.912578 env[1141]: time="2023-10-02T20:00:32.912555385Z" level=warning msg="cleaning up after shim disconnected" id=45c09315ddd84fc46175bcfa8597e98bed3d41f0559c2cde39b5ca1affe6a544 namespace=k8s.io Oct 2 20:00:32.912673 env[1141]: time="2023-10-02T20:00:32.912658545Z" level=info msg="cleaning up dead shim" Oct 2 20:00:32.920574 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5bcdaaaeebb6d46e867ef891f2897ef468c18e6307861a227b8eeb5750b13b7-rootfs.mount: Deactivated successfully. 
Oct 2 20:00:32.924382 env[1141]: time="2023-10-02T20:00:32.924351463Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:00:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2326 runtime=io.containerd.runc.v2\n" Oct 2 20:00:32.924648 env[1141]: time="2023-10-02T20:00:32.924415304Z" level=info msg="shim disconnected" id=d5bcdaaaeebb6d46e867ef891f2897ef468c18e6307861a227b8eeb5750b13b7 Oct 2 20:00:32.924695 env[1141]: time="2023-10-02T20:00:32.924647744Z" level=warning msg="cleaning up after shim disconnected" id=d5bcdaaaeebb6d46e867ef891f2897ef468c18e6307861a227b8eeb5750b13b7 namespace=k8s.io Oct 2 20:00:32.924695 env[1141]: time="2023-10-02T20:00:32.924657384Z" level=info msg="cleaning up dead shim" Oct 2 20:00:32.924969 env[1141]: time="2023-10-02T20:00:32.924933425Z" level=info msg="TearDown network for sandbox \"45c09315ddd84fc46175bcfa8597e98bed3d41f0559c2cde39b5ca1affe6a544\" successfully" Oct 2 20:00:32.925066 env[1141]: time="2023-10-02T20:00:32.925047586Z" level=info msg="StopPodSandbox for \"45c09315ddd84fc46175bcfa8597e98bed3d41f0559c2cde39b5ca1affe6a544\" returns successfully" Oct 2 20:00:32.933407 env[1141]: time="2023-10-02T20:00:32.933356973Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:00:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2344 runtime=io.containerd.runc.v2\n" Oct 2 20:00:32.934926 env[1141]: time="2023-10-02T20:00:32.934898458Z" level=info msg="StopContainer for \"d5bcdaaaeebb6d46e867ef891f2897ef468c18e6307861a227b8eeb5750b13b7\" returns successfully" Oct 2 20:00:32.935356 env[1141]: time="2023-10-02T20:00:32.935332059Z" level=info msg="StopPodSandbox for \"06c2a517510a3012773cc7df9ae8e642376a82d12fcf8f38790419bff729b2c5\"" Oct 2 20:00:32.935427 env[1141]: time="2023-10-02T20:00:32.935379740Z" level=info msg="Container to stop \"d5bcdaaaeebb6d46e867ef891f2897ef468c18e6307861a227b8eeb5750b13b7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 20:00:32.936470 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-06c2a517510a3012773cc7df9ae8e642376a82d12fcf8f38790419bff729b2c5-shm.mount: Deactivated successfully. Oct 2 20:00:32.943389 systemd[1]: cri-containerd-06c2a517510a3012773cc7df9ae8e642376a82d12fcf8f38790419bff729b2c5.scope: Deactivated successfully. Oct 2 20:00:32.942000 audit: BPF prog-id=72 op=UNLOAD Oct 2 20:00:32.945317 kernel: audit: type=1334 audit(1696276832.942:711): prog-id=72 op=UNLOAD Oct 2 20:00:32.945000 audit: BPF prog-id=75 op=UNLOAD Oct 2 20:00:32.947314 kernel: audit: type=1334 audit(1696276832.945:712): prog-id=75 op=UNLOAD Oct 2 20:00:32.971131 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06c2a517510a3012773cc7df9ae8e642376a82d12fcf8f38790419bff729b2c5-rootfs.mount: Deactivated successfully. 
Oct 2 20:00:32.975193 env[1141]: time="2023-10-02T20:00:32.975143230Z" level=info msg="shim disconnected" id=06c2a517510a3012773cc7df9ae8e642376a82d12fcf8f38790419bff729b2c5
Oct 2 20:00:32.975472 env[1141]: time="2023-10-02T20:00:32.975203070Z" level=warning msg="cleaning up after shim disconnected" id=06c2a517510a3012773cc7df9ae8e642376a82d12fcf8f38790419bff729b2c5 namespace=k8s.io
Oct 2 20:00:32.975472 env[1141]: time="2023-10-02T20:00:32.975213510Z" level=info msg="cleaning up dead shim"
Oct 2 20:00:32.983370 env[1141]: time="2023-10-02T20:00:32.983329777Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:00:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2375 runtime=io.containerd.runc.v2\n"
Oct 2 20:00:32.983665 env[1141]: time="2023-10-02T20:00:32.983638818Z" level=info msg="TearDown network for sandbox \"06c2a517510a3012773cc7df9ae8e642376a82d12fcf8f38790419bff729b2c5\" successfully"
Oct 2 20:00:32.983722 env[1141]: time="2023-10-02T20:00:32.983664978Z" level=info msg="StopPodSandbox for \"06c2a517510a3012773cc7df9ae8e642376a82d12fcf8f38790419bff729b2c5\" returns successfully"
Oct 2 20:00:33.045965 kubelet[1445]: I1002 20:00:33.045896 1445 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-cilium-ipsec-secrets\") pod \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\" (UID: \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\") "
Oct 2 20:00:33.045965 kubelet[1445]: I1002 20:00:33.045948 1445 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-lib-modules\") pod \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\" (UID: \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\") "
Oct 2 20:00:33.045965 kubelet[1445]: I1002 20:00:33.045969 1445 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-host-proc-sys-kernel\") pod \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\" (UID: \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\") "
Oct 2 20:00:33.046240 kubelet[1445]: I1002 20:00:33.045989 1445 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-bpf-maps\") pod \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\" (UID: \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\") "
Oct 2 20:00:33.046240 kubelet[1445]: I1002 20:00:33.046009 1445 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-cilium-run\") pod \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\" (UID: \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\") "
Oct 2 20:00:33.046240 kubelet[1445]: I1002 20:00:33.046025 1445 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-cilium-cgroup\") pod \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\" (UID: \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\") "
Oct 2 20:00:33.046240 kubelet[1445]: I1002 20:00:33.046043 1445 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-host-proc-sys-net\") pod \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\" (UID: \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\") "
Oct 2 20:00:33.046240 kubelet[1445]: I1002 20:00:33.046063 1445 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-clustermesh-secrets\") pod \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\" (UID: \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\") "
Oct 2 20:00:33.046240 kubelet[1445]: I1002 20:00:33.046080 1445 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-xtables-lock\") pod \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\" (UID: \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\") "
Oct 2 20:00:33.046415 kubelet[1445]: I1002 20:00:33.046102 1445 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-cilium-config-path\") pod \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\" (UID: \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\") "
Oct 2 20:00:33.046415 kubelet[1445]: I1002 20:00:33.046119 1445 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-hostproc\") pod \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\" (UID: \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\") "
Oct 2 20:00:33.046415 kubelet[1445]: I1002 20:00:33.046137 1445 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-hubble-tls\") pod \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\" (UID: \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\") "
Oct 2 20:00:33.046415 kubelet[1445]: I1002 20:00:33.046161 1445 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-etc-cni-netd\") pod \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\" (UID: \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\") "
Oct 2 20:00:33.046415 kubelet[1445]: I1002 20:00:33.046185 1445 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4px62\" (UniqueName: \"kubernetes.io/projected/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-kube-api-access-4px62\") pod \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\" (UID: \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\") "
Oct 2 20:00:33.046415 kubelet[1445]: I1002 20:00:33.046206 1445 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-cni-path\") pod \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\" (UID: \"8de52f76-f8ad-43ae-9ef4-27ad0d637d5c\") "
Oct 2 20:00:33.046561 kubelet[1445]: I1002 20:00:33.046246 1445 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-cni-path" (OuterVolumeSpecName: "cni-path") pod "8de52f76-f8ad-43ae-9ef4-27ad0d637d5c" (UID: "8de52f76-f8ad-43ae-9ef4-27ad0d637d5c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 20:00:33.046561 kubelet[1445]: I1002 20:00:33.046265 1445 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-hostproc" (OuterVolumeSpecName: "hostproc") pod "8de52f76-f8ad-43ae-9ef4-27ad0d637d5c" (UID: "8de52f76-f8ad-43ae-9ef4-27ad0d637d5c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 20:00:33.046561 kubelet[1445]: I1002 20:00:33.046323 1445 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8de52f76-f8ad-43ae-9ef4-27ad0d637d5c" (UID: "8de52f76-f8ad-43ae-9ef4-27ad0d637d5c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 20:00:33.046561 kubelet[1445]: I1002 20:00:33.046467 1445 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8de52f76-f8ad-43ae-9ef4-27ad0d637d5c" (UID: "8de52f76-f8ad-43ae-9ef4-27ad0d637d5c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 20:00:33.046561 kubelet[1445]: I1002 20:00:33.046470 1445 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8de52f76-f8ad-43ae-9ef4-27ad0d637d5c" (UID: "8de52f76-f8ad-43ae-9ef4-27ad0d637d5c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 20:00:33.046681 kubelet[1445]: I1002 20:00:33.046490 1445 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8de52f76-f8ad-43ae-9ef4-27ad0d637d5c" (UID: "8de52f76-f8ad-43ae-9ef4-27ad0d637d5c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 20:00:33.046681 kubelet[1445]: I1002 20:00:33.046499 1445 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8de52f76-f8ad-43ae-9ef4-27ad0d637d5c" (UID: "8de52f76-f8ad-43ae-9ef4-27ad0d637d5c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 20:00:33.046681 kubelet[1445]: W1002 20:00:33.046463 1445 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Oct 2 20:00:33.046681 kubelet[1445]: I1002 20:00:33.046512 1445 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8de52f76-f8ad-43ae-9ef4-27ad0d637d5c" (UID: "8de52f76-f8ad-43ae-9ef4-27ad0d637d5c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 20:00:33.046681 kubelet[1445]: I1002 20:00:33.046528 1445 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8de52f76-f8ad-43ae-9ef4-27ad0d637d5c" (UID: "8de52f76-f8ad-43ae-9ef4-27ad0d637d5c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 20:00:33.046799 kubelet[1445]: I1002 20:00:33.046544 1445 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8de52f76-f8ad-43ae-9ef4-27ad0d637d5c" (UID: "8de52f76-f8ad-43ae-9ef4-27ad0d637d5c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 20:00:33.048586 kubelet[1445]: I1002 20:00:33.048551 1445 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8de52f76-f8ad-43ae-9ef4-27ad0d637d5c" (UID: "8de52f76-f8ad-43ae-9ef4-27ad0d637d5c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 2 20:00:33.049582 kubelet[1445]: I1002 20:00:33.049547 1445 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "8de52f76-f8ad-43ae-9ef4-27ad0d637d5c" (UID: "8de52f76-f8ad-43ae-9ef4-27ad0d637d5c"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 2 20:00:33.049657 kubelet[1445]: I1002 20:00:33.049624 1445 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8de52f76-f8ad-43ae-9ef4-27ad0d637d5c" (UID: "8de52f76-f8ad-43ae-9ef4-27ad0d637d5c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 2 20:00:33.050129 kubelet[1445]: I1002 20:00:33.050105 1445 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8de52f76-f8ad-43ae-9ef4-27ad0d637d5c" (UID: "8de52f76-f8ad-43ae-9ef4-27ad0d637d5c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 20:00:33.051646 kubelet[1445]: I1002 20:00:33.051617 1445 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-kube-api-access-4px62" (OuterVolumeSpecName: "kube-api-access-4px62") pod "8de52f76-f8ad-43ae-9ef4-27ad0d637d5c" (UID: "8de52f76-f8ad-43ae-9ef4-27ad0d637d5c"). InnerVolumeSpecName "kube-api-access-4px62". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 20:00:33.148400 kubelet[1445]: I1002 20:00:33.147250 1445 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdtdk\" (UniqueName: \"kubernetes.io/projected/1e3dd315-24f6-4a31-b945-a09b09949520-kube-api-access-bdtdk\") pod \"1e3dd315-24f6-4a31-b945-a09b09949520\" (UID: \"1e3dd315-24f6-4a31-b945-a09b09949520\") "
Oct 2 20:00:33.148517 kubelet[1445]: I1002 20:00:33.148410 1445 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e3dd315-24f6-4a31-b945-a09b09949520-cilium-config-path\") pod \"1e3dd315-24f6-4a31-b945-a09b09949520\" (UID: \"1e3dd315-24f6-4a31-b945-a09b09949520\") "
Oct 2 20:00:33.148517 kubelet[1445]: I1002 20:00:33.148481 1445 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-cilium-config-path\") on node \"10.0.0.12\" DevicePath \"\""
Oct 2 20:00:33.148609 kubelet[1445]: I1002 20:00:33.148592 1445 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-hostproc\") on node \"10.0.0.12\" DevicePath \"\""
Oct 2 20:00:33.148645 kubelet[1445]: I1002 20:00:33.148614 1445 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-hubble-tls\") on node \"10.0.0.12\" DevicePath \"\""
Oct 2 20:00:33.148645 kubelet[1445]: I1002 20:00:33.148625 1445 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-etc-cni-netd\") on node \"10.0.0.12\" DevicePath \"\""
Oct 2 20:00:33.148645 kubelet[1445]: I1002 20:00:33.148637 1445 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-4px62\" (UniqueName: \"kubernetes.io/projected/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-kube-api-access-4px62\") on node \"10.0.0.12\" DevicePath \"\""
Oct 2 20:00:33.148645 kubelet[1445]: I1002 20:00:33.148647 1445 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-cni-path\") on node \"10.0.0.12\" DevicePath \"\""
Oct 2 20:00:33.148772 kubelet[1445]: I1002 20:00:33.148684 1445 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-cilium-ipsec-secrets\") on node \"10.0.0.12\" DevicePath \"\""
Oct 2 20:00:33.148772 kubelet[1445]: I1002 20:00:33.148697 1445 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-lib-modules\") on node \"10.0.0.12\" DevicePath \"\""
Oct 2 20:00:33.148772 kubelet[1445]: I1002 20:00:33.148721 1445 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-host-proc-sys-kernel\") on node \"10.0.0.12\" DevicePath \"\""
Oct 2 20:00:33.148772 kubelet[1445]: I1002 20:00:33.148734 1445 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-bpf-maps\") on node \"10.0.0.12\" DevicePath \"\""
Oct 2 20:00:33.148772 kubelet[1445]: I1002 20:00:33.148771 1445 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-cilium-run\") on node \"10.0.0.12\" DevicePath \"\""
Oct 2 20:00:33.148891 kubelet[1445]: I1002 20:00:33.148785 1445 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-cilium-cgroup\") on node \"10.0.0.12\" DevicePath \"\""
Oct 2 20:00:33.148891 kubelet[1445]: I1002 20:00:33.148794 1445 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-host-proc-sys-net\") on node \"10.0.0.12\" DevicePath \"\""
Oct 2 20:00:33.148891 kubelet[1445]: I1002 20:00:33.148803 1445 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-clustermesh-secrets\") on node \"10.0.0.12\" DevicePath \"\""
Oct 2 20:00:33.148891 kubelet[1445]: I1002 20:00:33.148813 1445 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8de52f76-f8ad-43ae-9ef4-27ad0d637d5c-xtables-lock\") on node \"10.0.0.12\" DevicePath \"\""
Oct 2 20:00:33.149381 kubelet[1445]: W1002 20:00:33.149347 1445 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/1e3dd315-24f6-4a31-b945-a09b09949520/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Oct 2 20:00:33.151355 kubelet[1445]: I1002 20:00:33.151329 1445 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e3dd315-24f6-4a31-b945-a09b09949520-kube-api-access-bdtdk" (OuterVolumeSpecName: "kube-api-access-bdtdk") pod "1e3dd315-24f6-4a31-b945-a09b09949520" (UID: "1e3dd315-24f6-4a31-b945-a09b09949520"). InnerVolumeSpecName "kube-api-access-bdtdk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 20:00:33.153386 kubelet[1445]: I1002 20:00:33.153353 1445 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e3dd315-24f6-4a31-b945-a09b09949520-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1e3dd315-24f6-4a31-b945-a09b09949520" (UID: "1e3dd315-24f6-4a31-b945-a09b09949520"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 2 20:00:33.249721 kubelet[1445]: I1002 20:00:33.249678 1445 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-bdtdk\" (UniqueName: \"kubernetes.io/projected/1e3dd315-24f6-4a31-b945-a09b09949520-kube-api-access-bdtdk\") on node \"10.0.0.12\" DevicePath \"\""
Oct 2 20:00:33.249721 kubelet[1445]: I1002 20:00:33.249713 1445 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e3dd315-24f6-4a31-b945-a09b09949520-cilium-config-path\") on node \"10.0.0.12\" DevicePath \"\""
Oct 2 20:00:33.398538 kubelet[1445]: E1002 20:00:33.398440 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 20:00:33.406476 systemd[1]: Removed slice kubepods-besteffort-pod1e3dd315_24f6_4a31_b945_a09b09949520.slice.
Oct 2 20:00:33.407404 systemd[1]: Removed slice kubepods-burstable-pod8de52f76_f8ad_43ae_9ef4_27ad0d637d5c.slice.
Oct 2 20:00:33.822726 kubelet[1445]: I1002 20:00:33.822631 1445 scope.go:115] "RemoveContainer" containerID="dd5c75824104f4ea6235567aec3debbdf3402858fe6e42673f6bf87f02807f7d"
Oct 2 20:00:33.824256 env[1141]: time="2023-10-02T20:00:33.824213087Z" level=info msg="RemoveContainer for \"dd5c75824104f4ea6235567aec3debbdf3402858fe6e42673f6bf87f02807f7d\""
Oct 2 20:00:33.827295 env[1141]: time="2023-10-02T20:00:33.827250857Z" level=info msg="RemoveContainer for \"dd5c75824104f4ea6235567aec3debbdf3402858fe6e42673f6bf87f02807f7d\" returns successfully"
Oct 2 20:00:33.827504 kubelet[1445]: I1002 20:00:33.827486 1445 scope.go:115] "RemoveContainer" containerID="d5bcdaaaeebb6d46e867ef891f2897ef468c18e6307861a227b8eeb5750b13b7"
Oct 2 20:00:33.828524 env[1141]: time="2023-10-02T20:00:33.828479101Z" level=info msg="RemoveContainer for \"d5bcdaaaeebb6d46e867ef891f2897ef468c18e6307861a227b8eeb5750b13b7\""
Oct 2 20:00:33.830697 env[1141]: time="2023-10-02T20:00:33.830649388Z" level=info msg="RemoveContainer for \"d5bcdaaaeebb6d46e867ef891f2897ef468c18e6307861a227b8eeb5750b13b7\" returns successfully"
Oct 2 20:00:33.830898 kubelet[1445]: I1002 20:00:33.830880 1445 scope.go:115] "RemoveContainer" containerID="d5bcdaaaeebb6d46e867ef891f2897ef468c18e6307861a227b8eeb5750b13b7"
Oct 2 20:00:33.831216 env[1141]: time="2023-10-02T20:00:33.831083629Z" level=error msg="ContainerStatus for \"d5bcdaaaeebb6d46e867ef891f2897ef468c18e6307861a227b8eeb5750b13b7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d5bcdaaaeebb6d46e867ef891f2897ef468c18e6307861a227b8eeb5750b13b7\": not found"
Oct 2 20:00:33.831295 kubelet[1445]: E1002 20:00:33.831261 1445 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d5bcdaaaeebb6d46e867ef891f2897ef468c18e6307861a227b8eeb5750b13b7\": not found" containerID="d5bcdaaaeebb6d46e867ef891f2897ef468c18e6307861a227b8eeb5750b13b7"
Oct 2 20:00:33.831350 kubelet[1445]: I1002 20:00:33.831298 1445 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:d5bcdaaaeebb6d46e867ef891f2897ef468c18e6307861a227b8eeb5750b13b7} err="failed to get container status \"d5bcdaaaeebb6d46e867ef891f2897ef468c18e6307861a227b8eeb5750b13b7\": rpc error: code = NotFound desc = an error occurred when try to find container \"d5bcdaaaeebb6d46e867ef891f2897ef468c18e6307861a227b8eeb5750b13b7\": not found"
Oct 2 20:00:33.870850 systemd[1]: var-lib-kubelet-pods-8de52f76\x2df8ad\x2d43ae\x2d9ef4\x2d27ad0d637d5c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4px62.mount: Deactivated successfully.
Oct 2 20:00:33.870955 systemd[1]: var-lib-kubelet-pods-1e3dd315\x2d24f6\x2d4a31\x2db945\x2da09b09949520-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbdtdk.mount: Deactivated successfully.
Oct 2 20:00:33.871009 systemd[1]: var-lib-kubelet-pods-8de52f76\x2df8ad\x2d43ae\x2d9ef4\x2d27ad0d637d5c-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Oct 2 20:00:33.871056 systemd[1]: var-lib-kubelet-pods-8de52f76\x2df8ad\x2d43ae\x2d9ef4\x2d27ad0d637d5c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Oct 2 20:00:33.871100 systemd[1]: var-lib-kubelet-pods-8de52f76\x2df8ad\x2d43ae\x2d9ef4\x2d27ad0d637d5c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Oct 2 20:00:34.339130 kubelet[1445]: E1002 20:00:34.339070 1445 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 20:00:34.399838 kubelet[1445]: E1002 20:00:34.399768 1445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"